Older blog entries for mhausenblas (starting at number 49)

JSON, data and the REST

Tomorrow, on 8.8., is International JSON Day. Why? Because I say so!

Is there a better way to say ‘thank you’ to a person who gave us so much – yeah, I’m talking about Doug Crockford – and to acknowledge how handy, useful and cool the piece of technology is that this person ‘discovered’?

From its humble beginnings some 10 years ago, JSON is now the lightweight data lingua franca. Of the nine Web APIs I had a look at recently in the REST: From Research to Practice book, seven offered their data in JSON. These days it is possible to access and process JSON data from virtually any programming language – check out the list at json.org if you doubt that. I guess the rise of JSON and its continuing success story is at least partially due to its inherent simplicity – all you get are key/value pairs and lists. And in 80% or more of the use cases that is likely all you need. Heck, even I prefer to consume JSON in my Web applications over any sort of XML-based data source or any given RDF serialization.
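And ‘simple’ here is not a figure of speech – a complete round-trip, for example in Python, needs nothing beyond the standard library (the record below is of course made up):

```python
import json

# All JSON gives you: key/value pairs (objects) and lists (arrays) --
# which covers a surprising share of real-world data needs.
record = {
    "name": "Michael",
    "interests": ["REST", "Linked Data", "NoSQL"],
}

serialized = json.dumps(record)   # dict -> JSON text
parsed = json.loads(serialized)   # JSON text -> dict

print(parsed["interests"][0])
```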

But the story doesn’t end here. People and organisations nowadays take JSON as a given and either try to make it ‘better’ or leverage it for certain purposes. Let’s have a look at three of these examples …

JSON Schema

I reckon one of the first and most obvious things people were discussing once JSON reached a certain level of popularity was how to validate JSON data. And what do we do as good engineers? We invent a schema language, for sure! So, there you go: json-schema.org tries to establish a schema language for JSON. The IETF Internet draft by Kris Zyp states:

JSON Schema provides a contract for what JSON data is required for a given application and how to interact with it. JSON Schema is intended to define validation, documentation, hyperlink navigation, and interaction control of JSON data.

One rather interesting bit, beside the obvious validation use case, is the support for ‘hyperlink navigation’. We’ll come back to this later.
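To make that a bit more tangible, here is a sketch in the spirit of the draft – the schema below is hand-written, and the validate() helper is a toy stand-in for a real JSON Schema validator (it checks required properties and primitive types only). Note the "links" entry, which is where the draft’s ‘hyperlink navigation’ support lives:

```python
# A hand-written schema roughly in the style of the Internet draft.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "required": True},
        "age": {"type": "integer"},
    },
    # 'hyperlink navigation': link templates live alongside validation rules
    "links": [{"rel": "self", "href": "/people/{name}"}],
}

TYPES = {"string": str, "integer": int}

def validate(instance, schema):
    """Toy validator: required properties and primitive types only."""
    errors = []
    for prop, rules in schema.get("properties", {}).items():
        if prop not in instance:
            if rules.get("required"):
                errors.append("missing required property: " + prop)
            continue
        expected = TYPES.get(rules.get("type"))
        if expected and not isinstance(instance[prop], expected):
            errors.append("wrong type for property: " + prop)
    return errors

print(validate({"name": "Doug"}, schema))   # []
print(validate({"age": "forty"}, schema))   # two errors
```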

Atom-done-right: OData

I really like the Atom format as well as the Atom Publishing Protocol (APP). A classic, in REST terms. I just wonder, why on earth is it based on XML?

Enter OData. Microsoft, in a very clever move, adopted Atom and the APP and made them the core of OData; but they didn’t stop there – Microsoft uses JSON as one of the two official formats for OData. They got this one dead right.

OData is an interesting beast, because here we find an attempt to address one of the (perceived) shortcomings of JSON – it is not very ‘webby’. I hear you saying: ‘Huh? What’s that and why does it matter?’ … well, it matters to some of us RESTafarians who respect and apply HATEOAS. In short: as JSON uses a rather restricted ‘data type’ system, there is no explicit support for URIs and (typed) links. Of course you can use JSON to represent and transport a URI (or many, FWIW). But the way you choose to represent, say, a hyperlink might look different from the way I or someone else does, meaning that there is no interoperability. I guess, as long as HATEOAS is a niche concept, not grokked by many people, this might not be such a pressing issue; however, there are cases where it is vital to be able to unambiguously deal with URIs and (typed) links. More in the next example …
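To make the interoperability problem concrete: the two (made-up) documents below carry the same hyperlink, but a generic client has no way of knowing that – it needs out-of-band knowledge for each convention:

```python
import json

# Two equally valid, mutually incompatible ways to put a link in JSON.
mine = json.loads('{"name": "Michael", "homepage": "http://example.org/mh"}')
yours = json.loads(
    '{"name": "Michael",'
    ' "links": [{"rel": "homepage", "href": "http://example.org/mh"}]}'
)

def find_link(doc, rel):
    """Only understands the second convention -- typed link objects."""
    for link in doc.get("links", []):
        if link.get("rel") == rel:
            return link["href"]
    return None

print(find_link(yours, "homepage"))  # http://example.org/mh
print(find_link(mine, "homepage"))   # None -- the link is invisible
```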

Can I squeeze a graph into JSON? Sir, yes, Sir!

Some time ago Manu Sporny and others started an activity called JSON-LD (JavaScript Object Notation for Linking Data) that has gained some momentum over the past year or so; as of the time of writing, support for some popular languages incl. C++, JavaScript, Ruby and Python is available. JSON-LD is designed to be able to express RDF, microformats as well as Microdata. With the recent introduction of Schema.org, this means JSON-LD is something you might want to keep on your radar …
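For a rough idea of the shape of a JSON-LD document – keyword names have varied across the drafts, so treat this as a sketch rather than normative syntax, with invented data – the @context maps plain JSON keys onto vocabulary URIs (FOAF here), turning ordinary key/value pairs into graph data:

```python
import json

# The "@context" maps plain JSON keys onto vocabulary URIs, and "@id"
# gives the entity itself a URI -- plain-looking JSON becomes a graph.
doc = {
    "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "homepage": "http://xmlns.com/foaf/0.1/homepage",
    },
    "@id": "http://example.org/people/michael",
    "name": "Michael",
    "homepage": "http://example.org/mh",
}

# A consumer unaware of JSON-LD still sees ordinary key/value pairs:
print(json.loads(json.dumps(doc))["name"])
```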

On a related note: initially, the W3C planned to standardize how to serialize RDF in JSON. Once the respective Working Group was in place, this was dropped. I think they made a wise decision. Don’t get me wrong, I’d also have loved to get an interoperable way to deal with RDF in JSON out there, and there are certainly enough ways one could do it, but I guess we’re simply not there yet. And JSON-LD? Dunno, to be honest – I mean, I like and support it and do use it; very handy, indeed. Will it be the solution for HATEOAS and Linked Data? Time will tell.

Wrapping up: JSON is an awesome piece of technology, largely due to its simplicity and universality and, we should not forget: due to a man who rightly identified its true potential and never stopped telling the world about it.

Tomorrow, on 8.8., is International JSON Day. Join in, spread the word, and say thank you to Doug as well!


Filed under: Announcement, Big Data, Cloud Computing, IETF, Linked Data, NoSQL, W3C

Syndicated 2011-08-07 07:51:12 from Web of Data

Towards Networked Data

This is the second post in the solving-tomorrow’s-problems-with-yesterday’s-tools series.

In his seminal article If You Have Too Much Data, then “Good Enough” Is Good Enough, Pat Helland calls for a ‘new theory for data’ – I’d like to call this networked data (meaning: consuming and manipulating distributed data at Web scale).

In this post I’m going to elaborate on the first of his points in the context of Linked Data:

We need a new theory and taxonomy of data that must include:

  • Identity and versions. Unlocked data comes with identity and optional versions.

If you take a 10,000-foot view of the Linked Data principles, they read essentially as follows (the parts in bold are what I added here):

  1. Use URIs as names for things – entity identity
  2. Use HTTP URIs so that people can look up those names – entity access
  3. When someone looks up a URI, provide useful information, using the standards – entity structure
  4. Include links to other URIs, so that they can discover more things – entity integration

One word of caution before we dive in: Linked Data, as we speak, is pretty well-defined for the read-only case (the write-enabled case is still subject to research and standardisation).

If you compare the Linked Data principles from above with what Pat demands from the ‘new theory for data’, I think it is fair to state that the entity identity part as well as the entity access part are well covered. The versioning part might be a bit tricky, but doable – for example with Named Graphs, quads, etc.

Concerning the entity structure, it occurs to me that there are two schools of thought: on the one hand the ‘purists’, who demand that only RDF serialisations be allowed for representing an entity’s structure, and on the other hand a more liberal interpretation, which includes technologies such as OData and, only recently (triggered by the introduction of Schema.org), also Microdata. Time will tell the uptake and success of any of the mentioned technologies, but when in doubt I prefer to be inclusive rather than exclusive concerning this question.

The entity integration part is not explicitly mentioned by Pat – I wonder why? ;)


Filed under: FYI, Linked Data, NoSQL

Syndicated 2011-06-08 08:03:36 from Web of Data

Ye shall not DELETE data!

This is the first post in the solving-tomorrow’s-problems-with-yesterday’s-tools series.

Alex Popescu recently reviewed a post by Mikayel Vardanyan on Picking the Right NoSQL Database Tool and was puzzled about the following statement of Mikayel’s:

[Relational database systems] allow versioning or activities like: Create, Read, Update and Delete. For databases, updates should never be allowed, because they destroy information. Rather, when data changes, the database should just add another record and note duly the previous value for that record.

I don’t find it puzzling at all. As Pat Helland rightly says:

In large-scale systems, you don’t update data, you add new data or create a new version.

OK, I guess arguing this on an abstract level serves nobody. Let’s get our hands dirty and have a look at a concrete example. I pick an example from the Linked Data world, but there is nothing really specific to it – it just happens to be the data language I speak and dream in ;)

Look at the following piece of data:

… and now let’s capture the fact that my address has changed …
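As a hypothetical stand-in for the data in question (names and addresses are invented for illustration), the two states and the UPDATE between them could be written like this:

```python
# Invented illustration: one (subject, predicate) pair and its value.
data = {("ex:michael", "ex:address"): "3 Crescent Rd, Galway"}

# A classic UPDATE overwrites the value in place ...
data[("ex:michael", "ex:address")] = "7 High St, Dublin"

# ... and after it has run, nothing in the data can answer the question
# 'where did Michael live before?':
print(data[("ex:michael", "ex:address")])
```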

This looks normal at first sight, but there are two drawbacks attached to it:

  1. If I ask the question: ‘Where has Michael been living previously?’, I can’t get an answer anymore once the update has been performed, unless I have a local copy of the old data piece.
  2. Whenever I ask the question: ‘Where does Michael live?’ I need to implicitly add ‘at the moment’, as the information is not scoped.

There are a few ways one can deal with this, though. And as a consequence, here is what I demand:

  • Never ever DELETE data – it’s slow and lossy; also updating data is not good, as UPDATE is essentially DELETE + INSERT and hence lossy as well.
  • Each piece of data must be versioned – in the Linked Data world one could, for example, use quads rather than triples to capture the context of the assertion expressed in the data.
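As a sketch of the second demand – one possible shape among several, with invented data – each assertion becomes a quad whose fourth component is a version, and ‘update’ means appending a new quad, never deleting:

```python
class QuadStore:
    """Append-only store of (subject, predicate, object, version) quads."""

    def __init__(self):
        self.quads = []

    def assert_fact(self, s, p, o, version):
        # Never DELETE, never UPDATE in place -- only append.
        self.quads.append((s, p, o, version))

    def value(self, s, p, at_version):
        """Most recent object for (s, p) as of the given version."""
        candidates = [(v, o) for (s2, p2, o, v) in self.quads
                      if s2 == s and p2 == p and v <= at_version]
        return max(candidates)[1] if candidates else None

store = QuadStore()
store.assert_fact("ex:michael", "ex:address", "3 Crescent Rd, Galway", 1)
store.assert_fact("ex:michael", "ex:address", "7 High St, Dublin", 2)

print(store.value("ex:michael", "ex:address", 2))  # current address
print(store.value("ex:michael", "ex:address", 1))  # the old one survives
```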

Oh, BTW, my dear colleagues from the SPARQL Working Group – having said this, I think SPARQL Update is heading in the wrong direction. Can we still change this, pretty please?

PS: disk space is cheap these days, as nicely pointed out by Dorian Taylor ;)


Filed under: Big Data, Cloud Computing, Linked Data, NoSQL, Proposal, W3C

Syndicated 2011-05-29 09:07:39 from Web of Data

Solving tomorrow’s problems with yesterday’s tools

Q: What is the difference between efficiency and effectiveness?
A: 42.

Why? Well, as we all know, 42 is the answer to the ultimate question of life, the universe, and everything. But did you know that in 2012 it will be 42 years since Codd introduced ‘A Relational Model of Data for Large Shared Data Banks’?

OK, now a more serious attempt to answer above question:

Efficiency is doing things right, effectiveness is doing the right thing.

This gem of wisdom was originally coined by the marvelous Peter Drucker (in his book The Effective Executive – read it, it’s worth every page) and nicely explains, IMO, what is going on: relational database systems are efficient. They are well suited for a certain type of problem: dealing with clearly-defined data in a rather static way. Are they effectively helping us to deal with big, messy data? I doubt it.

How come?

Pat Helland’s recent ACM Queue article If You Have Too Much Data, then “Good Enough” Is Good Enough offers us some very digestible and enlightening insights why SQL struggles with big data:

We can no longer pretend to live in a clean world. SQL and its Data Definition Language (DDL) assume a crisp and clear definition of the data, but that is a subset of the business examples we see in the world around us. It’s OK if we have lossy answers—that’s frequently what business needs.

… and also …

All data on the Internet is from the “past.” By the time you see it, the truthful state of any changing values may be different. [...] In loosely coupled systems, each system has a “now” inside and a “past” arriving in messages.

… and on he goes …

I observed that data that is locked (and inside a database) is seminally different from data that is unlocked. Unlocked data comes in clumps that have identity and versioning. When data is contained inside a database, it may be normalized and subjected to DDL schema transformations. When data is unlocked, it must be immutable (or have immutable versions).

These were just some quotes from Pat’s awesome paper. I really encourage you to read it yourself and discover maybe even more insights.

Coming back to the initial question: I think NoSQL is effective for big, messy data. It has yet to prove that it is efficient in terms of usability, optimization, etc. – due to the large number of competing solutions, the respective communities in NoSQL-land are smaller and more fragmented, but I guess it will undergo a consolidation process over the next couple of years.

Summing up: let’s not try to solve tomorrow’s problems with yesterday’s tools.


Filed under: Big Data, Cloud Computing, FYI, NoSQL

Syndicated 2011-05-29 06:07:56 from Web of Data

Why we link …

The incentives to put structured data on the Web seem to slowly seep in, but why does it make sense to link your data to other data? Why invest time and resources to offer 5-star data? Even though the interlinking itself is becoming more of a commodity these days – for example, the 24/7 platform we’re deploying in LATC is an interlinking cloud offering – the motivation for dataset publishers to set links to other datasets is, in my experience, not obvious.

I think it’s important to have a closer look at the motivation for interlinking data on the Web from a data integration perspective. Traditionally, you would download data from, say, Infochimps or you find it via CKAN or via the many other places that either directly offer data or provide a data catalog. Then you would put it in your favorite (NoSQL) database and use it in your application. Simple, isn’t it?

Let’s say you’re using a dataset about companies such as the Central Contractor Registration (CCR). These companies typically have a physical address (or: location) attached:

Now, imagine I ask you to render the location of a selection of companies on a map. This requires you to look up the geographical coordinates of a company in a service such as Geonames:
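The look-up/glue code in question might, as a sketch, look like the following – a local table stands in for the remote geo service call, and all names and coordinates are invented:

```python
# Stand-in for a remote geo service such as Geonames; in reality this
# would be an HTTP call to a search API returning JSON.
GEO_SERVICE = {
    "Galway": (53.27, -9.05),
    "Dublin": (53.33, -6.25),
}

# Invented example records from a company dataset.
companies = [
    {"name": "Acme Ltd", "city": "Galway"},
    {"name": "Foo Corp", "city": "Dublin"},
]

def with_coordinates(records):
    """The glue code every consumer has to re-write: join on the city."""
    out = []
    for rec in records:
        lat, lng = GEO_SERVICE[rec["city"]]
        out.append(dict(rec, lat=lat, lng=lng))
    return out

for rec in with_coordinates(companies):
    print(rec["name"], rec["lat"], rec["lng"])
```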

I bet you can automate this, right? Maybe a bit of manual work involved, but not too much, I guess. So, all is fine, right?

Not really.

The next developer who comes along and wants to use the company data and nicely map it has to go through the exact same process: figure out what geo service to use, write some look-up/glue code, import the data, and so on.

Wouldn’t it make more sense, from a re-usability point of view, if the original dataset provider (CCR in our example) had a look at its data, identified what entities (such as companies) are in there, and provided the links to other datasets (such as location data) up-front? This is, in a nutshell, what Tim Berners-Lee says concerning the 5th star of Open Data deployment:

Link your data to other people’s data to provide context.

To sum up: if you have data, think about providing this context – link it to other data in the Web and you make your data more useful and more usable and, in the long run, more used.

PS: the working title of this blog post was ‘As we may link’, to render homage to Vannevar Bush, but then I thought that might be a bit too cheesy ;)


Filed under: FYI, Linked Data

Syndicated 2011-05-22 20:37:30 from Web of Data

Can NoSQL help us in processing Linked Data?

This is an announcement and a call for feedback. Over the past couple of days I’ve compiled a short review article in which I look into NoSQL solutions and the extent to which they can be used to process Linked Data.

I’d like to extend and refine this article, but this only works if you share your experiences and let me know what I’m missing and where I’m maybe totally wrong.

If you just want to read it, use the following link: NoSQL solutions for Linked Data processing (read-only Web page).

If you want to provide feedback or rectify stuff I wrote, use: NoSQL solutions for Linked Data processing (Google Docs with discussion enabled).

Thanks, and enjoy reading as well as commenting on the article!


Filed under: Announcement, Linked Data

Syndicated 2011-05-02 20:30:55 from Web of Data

From CSV data on the Web to CSV data in the Web

In our daily work with Government data such as statistics, geographical data, etc. we often deal with Comma-Separated Values (CSV) files. Now, they are really handy as they are easy to produce and to consume: almost any language and platform I have come across so far has some support for parsing CSV files, and I can export CSV files from virtually any sort of (serious) application.

There is even a – probably not widely known – standard for CSV files (RFC 4180) that specifies the grammar and registers the normative MIME media type text/csv for CSV files.

So far, so good.

From a Web perspective, CSV files really are data objects, which however are rather coarse-granular. If I want to use a CSV file, I always have to use the entire file. There is no agreed-upon concept that allows me to refer to a certain cell, row or column. This was my main motivation to start working on what I called Addrable (from Addressable Table) earlier this year. I essentially hacked together a rather simple implementation of Addrables in JavaScript that understands URI fragment identifiers such as:

  • #col:temperature
  • #row:10
  • #where:city=Galway,reporter=Richard

Let’s have a closer look at what the result of the processing of such a fragment identifier against an example CSV file could be. I’m going to use the last one in the list above, that is, addressing a slice where the city column has the value ‘Galway’ and for the reporter column we ask it to be ‘Richard’.

The client-side implementation in jQuery provides a visual rendering of the selected part, see below a screen-shot (if you want to toy around with it, either clone or download it and open it locally in your browser):

There is also a server-side implementation using node.js available (deployed at addrable.no.de), outputting JSON:

{
  "header":
    ["date","temperature"],
  "rows":
    [
      ["2011-03-01", "2011-03-02", "2011-03-03"],
      ["4","10","5"]
    ]
}

Note: the processing of the fragment identifier is meant to be performed by the User Agent after the retrieval action has been completed. However, the server-side implementation demonstrates a workaround for the fact that the fragment identifier is not sent to the Server (see also the related W3C document on Repurposing the Hash Sign for the New Web).
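For the curious, here is a minimal sketch of the #where: semantics – this is not the actual Addrable code, and the table below is invented:

```python
import csv, io

CSV_DATA = """city,reporter,date,temperature
Galway,Richard,2011-03-01,4
Galway,Richard,2011-03-02,10
Dublin,Anna,2011-03-01,7
"""

def select(csv_text, fragment):
    """Resolve a '#where:key=value,...' fragment against CSV text."""
    constraints = dict(
        pair.split("=") for pair in fragment[len("#where:"):].split(",")
    )
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row for row in rows
            if all(row[k] == v for k, v in constraints.items())]

slice_ = select(CSV_DATA, "#where:city=Galway,reporter=Richard")
print([row["temperature"] for row in slice_])  # ['4', '10']
```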

Fast forwarding a couple of weeks.

Now, having an implementation is fine, but why not push the envelope and take it a step further, in order to help make the Web a better place?

Enter Erik Wilde, who did ‘URI Fragment Identifiers for the text/plain Media Type’ aka RFC 5147 some three years ago; and yes, I admit I was already a bit biased through my previous contributions to the Media Fragments work. We decided to join forces to work on ‘text/csv Fragment Identifiers’, based on the Addrable idea.

As a first step (well beside the actual writing of the Internet-Draft to be submitted to IETF) I had a quick look at what we can expect in terms of deployment. That is, a rather quick and naive survey based on some 60 CSV files manually harvested from the Web. The following figure gives you a rough idea what is going on:

To sum up the preliminary findings: almost half of the CSV files are (wrongly) served as text/plain, followed by some other non-conforming and partially exotic Media Types such as text/x-comma-separated-values. The bottom line is: only 10% of the CSV files are served correctly with text/csv. Why do we care, you ask? Well, for example, because the spec says that the header row is optional, but its presence can be flagged by an optional HTTP header parameter. Just wondering what the chances are ;)

Now, I admit that my sample here is rather small, but I think the distribution will roughly stay the same. By the way, is anyone aware of a good way to find CSV files, besides filetype:csv in Google or contains:csv in Bing, as I did?

We’d be glad to hear from you – do you think this is useful for your application? If yes, why? How would you use it? Or, maybe you want to do a proper CSV crawl to help us with the analysis?


Filed under: Announcement, FYI, Idea, IETF

Syndicated 2011-04-16 12:43:35 from Web of Data

CfP: 2nd International Workshop on RESTful Design, Hyderabad, India

If you’re into RESTful stuff, no matter if you’re a researcher or practitioner, consider submitting a paper to our WWW2011 Workshop on RESTful Design (see the Call for Papers for more details on how to participate).

I’m very happy to see the workshop taking place again this year, after the huge success we had last year, and I’m honored to serve on the Program Committee together with people like Jan Algermissen, Mike Amundsen, Joe Gregorio, Stefan Tilkov and Yves Lafon, just to name a few ;)

Hope to see you in India!


Filed under: Announcement

Syndicated 2011-01-06 12:03:13 from Web of Data

2010 in review

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Wow.

Crunchy numbers


The average container ship can carry about 4,500 containers. This blog was viewed about 18,000 times in 2010. If each view were a shipping container, your blog would have filled about 4 fully loaded ships.

In 2010, there were 21 new posts, growing the total archive of this blog to 59 posts. There were 6 pictures uploaded, taking up a total of 2 MB.

The busiest day of the year was February 12th with 449 views. The most popular post that day was Is Google a large-scale contributor to the LOD cloud?.

Where did they come from?

The top referring sites in 2010 were Google Reader, twitter.com, planetrdf.com, linkeddata.org, and sqlblog.com.

Some visitors came searching, mostly for data life cycle, web of data, sparql, hateos, and morphological analysis.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

  1. Is Google a large-scale contributor to the LOD cloud? (February 2010) – 7 comments
  2. Oh – it is data on the Web (April 2010) – 26 comments
  3. Towards Web-based SPARQL query management and execution (April 2010) – 10 comments
  4. Linked Data for Dummies (May 2010) – 6 comments
  5. Linked Enterprise Data in a nutshell (September 2010) – 4 comments


Filed under: FYI

Syndicated 2011-01-02 07:44:25 from Web of Data

Processing the LOD cloud with BigQuery

Google’s BigQuery is a large-scale, interactive query environment that can handle billions of records in seconds. Now, wouldn’t it be cool to process the 26+ billion triples from the LOD cloud with BigQuery?

I guess so ;)

So, I did a first step into this direction by setting up the BigQuery for Linked Data project containing:

  • A Python script called nt2csv.py that converts RDF/NTriples into BigQuery-compliant CSV;
  • BigQuery schemes that can be used together with the CSV data from above;
  • Step-by-step instructions on how to use nt2csv.py along with Google’s gsutil and bq command-line tools to import the above data into Google Storage and issue a query against the uploaded data in BigQuery.
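Conceptually the conversion is straightforward; a toy version of what a script like nt2csv.py has to do (my own sketch here, not the actual script – datatyped and language-tagged literals are ignored) might be:

```python
import csv, io, re

# One N-Triples line: <subject> <predicate> object .
# Objects are either URIs in angle brackets or quoted literals.
NT_LINE = re.compile(r'^<([^>]+)>\s+<([^>]+)>\s+(<[^>]+>|"[^"]*")\s*\.\s*$')

def nt_to_csv(nt_text):
    """Convert N-Triples text into a three-column CSV string."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["subject", "predicate", "object"])
    for line in nt_text.splitlines():
        m = NT_LINE.match(line)
        if not m:
            continue  # skip blank lines, comments, unparsable lines
        s, p, o = m.groups()
        writer.writerow([s, p, o.strip('<>"')])
    return out.getvalue()

triples = ('<http://example.org/a> '
           '<http://xmlns.com/foaf/0.1/knows> '
           '<http://example.org/b> .\n')
print(nt_to_csv(triples))
```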

Essentially, one can – given an account for Google Storage as well as an account for BigQuery – do the following:

bq query
"SELECT object FROM [mybucket/tables/rdf/tblNames]
WHERE predicate = 'http://xmlns.com/foaf/0.1/knows'
LIMIT 10"

… which roughly translates into the following SPARQL query:

PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?o
WHERE {
?s foaf:knows ?o .
}
LIMIT 10

Currently, I do possess a Google Storage account, but unfortunately not a BigQuery account (yeah, I’ve signed up but am still in the queue). So, I can’t really test this stuff – any takers?


Filed under: Experiment, Linked Data

Syndicated 2010-12-13 08:54:05 from Web of Data

