“There are two important themes behind everything we're delivering today,” says Bret Taylor, head of Facebook's platform products, on the Facebook Developer Blog about the recent announcements at the f8 conference in San Francisco. Facebook introduced the Open Graph protocol and the Graph API as the next evolution of the Facebook Platform.
First, the Web is moving to a model based on the connections between people and all the things they care about. Second, this connections-based Web is well on its way to being built and providing value to both users and developers — the underlying graph of connections just needs to be mapped in a way that makes it easy to use and interoperable.
Facebook introduced three new components of the Facebook Platform, two of which are the Open Graph protocol and the Graph API. The API provides access to Facebook objects (people, photos, events, etc.) and the connections between them (friends, tags, shared content, etc.) via uniform and consistent URIs. Every object can be accessed using the URL https://graph.facebook.com/ID, where ID is the object's unique ID in the social graph, and every connection (CONNECTION_TYPE) that the object supports can be examined using the URL https://graph.facebook.com/ID/CONNECTION_TYPE.
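These conventions are simple enough to exercise from a few lines of Python. The sketch below uses only the standard library; it assumes the public cocacola page identifier from the examples that follow and a feed connection type, and that (as at launch) public objects can be read without an access token.

    import json
    import urllib.request

    GRAPH = "https://graph.facebook.com"

    def fetch(path):
        """GET a Graph API resource and decode its JSON representation."""
        with urllib.request.urlopen(f"{GRAPH}/{path}") as resp:
            return json.loads(resp.read().decode("utf-8"))

    # An object: https://graph.facebook.com/ID
    page = fetch("cocacola")

    # A connection on that object: https://graph.facebook.com/ID/CONNECTION_TYPE
    feed = fetch("cocacola/feed")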
Excerpted from the Graph API page are some examples of URIs for accessing Facebook objects (resources) using their identifiers. At a high level they demonstrate how one would use the API.
All objects in Facebook can be accessed in the same way:
- Pages: https://graph.facebook.com/cocacola (Coca-Cola page)
- Events: https://graph.facebook.com/251906384206 (Facebook Developer Garage Austin)
- Groups: https://graph.facebook.com/2204501798 (Emacs users group)
- Applications: https://graph.facebook.com/2439131959 (the Graffiti app)
[…]
All of the objects in the Facebook social graph are connected to each other via relationships. Bret Taylor is a fan of the Coca-Cola page, and Bret Taylor and Arjun Banker are friends. We call those relationships connections in our API. You can examine the connections between objects using the URL structure https://graph.facebook.com/ID/CONNECTION_TYPE. The connections supported for people and pages include: […]
The URIs also support a special identifier, me, which refers to the current user. The Graph API uses OAuth 2.0 for authorization (the authentication guide has details of Facebook's OAuth 2.0 implementation).
OAuth 2.0 is a simpler version of OAuth that leverages SSL for API communication instead of relying on complex URL signature schemes and token exchanges. At a high level, using OAuth 2.0 entails getting an access token for a Facebook user via a redirect to Facebook. After you obtain the access token for a user, you can perform authorized requests on behalf of that user by including the access token in your Graph API requests:
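The excerpt breaks off at the point where the original showed a sample request; the sketch below reconstructs the flow in Python (standard library only). The oauth/authorize endpoint and the client_id, redirect_uri, and access_token parameter names are assumptions based on the authentication guide of the time, and the app credentials are placeholders rather than a definitive implementation.

    import json
    import urllib.parse
    import urllib.request

    APP_ID = "YOUR_APP_ID"                         # placeholder application ID
    REDIRECT_URI = "https://example.com/callback"  # placeholder redirect URI

    # Step 1: send the user's browser to Facebook to authorize the app.
    # Facebook redirects back to REDIRECT_URI with a code that the app
    # exchanges for an access token, as described in the authentication guide.
    authorize_url = (
        "https://graph.facebook.com/oauth/authorize?"
        + urllib.parse.urlencode({"client_id": APP_ID, "redirect_uri": REDIRECT_URI})
    )

    # Step 2: include the access token in every Graph API request, here
    # against the special "me" identifier for the current user.
    def fetch_me(access_token):
        url = "https://graph.facebook.com/me?" + urllib.parse.urlencode(
            {"access_token": access_token}
        )
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))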
Reactions from the web have been positive in general. In a post entitled “Uncomplicated Hypermedia”, Subbu Allamaraju says,
It is a pleasure to read the Facebook Graph API. It avoided many of the traps that web services offered by the other major players in the industry suffer from. Facebook’s API is simple, consistent and inter-connected. It is true to the spirit of the web.
He points to how the protocol and the Graph API leverage the power of hypermedia, showing that building simple representations of linked resources (in this case the Facebook object graph) doesn't have to be complicated. By contrast, some of the APIs available on the web use separate services for different types of resources, while others are based on the AtomPub protocol and extensions such as GData or OData.
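The hypermedia point is visible in the representations themselves: an object can advertise its connections as URLs that a client simply follows. Below is a rough sketch, assuming the metadata=1 introspection flag and the metadata/connections response fields described in the Graph API documentation of the time.

    import json
    import urllib.request

    def get_json(url):
        with urllib.request.urlopen(url) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # Ask for an object together with its metadata, which lists the object's
    # connections as ready-to-follow URLs (per the docs of the time).
    page = get_json("https://graph.facebook.com/cocacola?metadata=1")

    # The representation links to its own connections; the client follows
    # those URLs instead of assembling them from out-of-band knowledge.
    for name, url in page.get("metadata", {}).get("connections", {}).items():
        print(name, "->", url)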
Ars Technica welcomes the announcement as a move towards open standards.
Facebook's move towards open standards and an improved API was driven internally by some key people that have joined the company over the past year. Facebook employs well-known standardista David Recordon, one of the authors of OAuth and Atom Activity Extensions. The FriendFeed engineers who joined Facebook through an acquisition last year have also been instrumental in moving forward Facebook's development platform.
At the O'Reilly Radar, David Recordon, now an employee of Facebook, explains why these APIs will be good for the open web and highlights the changes he's excited about: features like the real-time API, support for OAuth 2.0, and the removal of the 24-hour caching limit for developers. He says,
It's easy as a technologist to think about openness solely in terms of technology, but openness is broader than that. Openness of technology means that others can build using the same tools that you do. Openness of data means that developers can build innovative products based on APIs that weren't previously possible. And openness between people is what happens when all of these things come together to give people better ways to share information.
According to an article in ReadWriteWeb, this could potentially have a huge impact on the Semantic Web.
One of the most exciting parts of the Facebook announcement to me personally is the possible breakthrough in semanticizing the Web.
For more information on how to leverage the API, be sure to check out the developer API documentation.
Community comments
Graph databases
by Emil Eifrem,
Well, if they pull this off, it is clearly a transformative moment for the web. And while I'm obviously biased, I believe it will only accelerate the need for graph database backends like Neo4j.
Remember how the social graph used to be all about people to people (1st generation social networks)? Then it expanded to people<->places (4square and others). Now Facebook is taking it to the next level (which they call Open Graph) by including everything from people to places to businesses to products as well as multi-faceted connections between all these things.
Basically, Facebook is representing all their data as a big, highly connected graph. They're exposing this to the world through the Facebook Graph API and they're expecting the world to step up and start annotating the web using the Open Graph Protocol. Finally, they're creating an ecosystem of social plugins that run on top of the Open Graph.
Working with the Open Graph is going to be part of everyday life for the web developer of tomorrow and while the data they process CAN be squeezed into a relational database or a document database, it's just so much easier and more efficient to work with it in its natural graphy form.
Welcome to the era of graph databases!
Cheers,
--
Emil Eifrem
neo4j.org
twitter.com/emileifrem
blogs.neotechnology.com/emil
Re: Graph databases
by peter lin,
That doesn't make any sense at all. A social graph should be dynamic, not static. Therefore storing graphs is a waste of time, since the graph changes over time and becomes stale rather quickly. Instead, the focus should be on discovering interesting things from the graphs. If a system were to try to generate a graph for every account on Facebook, that would be a tremendous waste of space and time. Plus, the graph is just like a map of a city. It's not the map that is interesting, it's the places you want to get to.
my biased 2 cents.
Re: Graph databases
by Bent Rasmussen,
Three points
- Any subgraph should be mostly (if not completely) static; most updates are not destructive because they come along with temporal annotations: e.g. used to be single, now married. Both pieces of information are there at the same time and are non-contradictory.
- A system shouldn't "try and generate a graph"; the underlying representation should be a graph.
- This doesn't really have anything to do with the Semantic Web as much as a semantic web (capitalization important)
Re: Graph databases
by peter lin,
If by subgraph you mean the leaf portions of the graph, I'm not convinced that applies across the board. I have no proof or data to back this up, but I suspect social graphs are less like a DAG (en.wikipedia.org/wiki/Directed_acyclic_graph) and more like messy webs. I'm sure there are types of social graphs that are mostly static.
By "generate a graph", I mean analyzing the data to produce a graph that captures the relationships so that someone can view or use it. The view can and should vary depending on the usage and the user. I'm not convinced of the value of storing lots of graphs. The cost versus benefit is, to me, far from clear.
Not ground breaking
by Yuriy Guskov,
In fact, there is Metaweb. There is an attempt to build an identification system for the Semantic Web based on URIs. Now there are Facebook graphs. Of course, it is cool, but all these systems miss some points: (a) URIs can't be used for identifying things, because they are already used for information resources while we need to identify real-life things as well, and URIs have no human-friendly form; (b) is Facebook sure it can identify all things? There are billions of people, millions of cities, companies, and whatever else, and some of them will never be present on Facebook; (c) such a system should be decentralized, because you, me, and everyone else would each want our OWN identification system, since even "I" refers to different people.
The Semantic Web itself is still not available to the general public and for some remains a sort of "academic" thing that is not practical. Of course, that is not true, or at least is only partly true. In general, there are two big problems with it: (a) usability, which means the Semantic Web should be more human-friendly, and (b) compatibility, which means the Semantic Web should gracefully resolve compatibility issues. And there is still no solution.
I am trying to propose a way of solving these and some other problems. Briefly, it is the convergence of usability and meaning, which should change not only the Semantic Web itself but the way we interact with computers. Think about precise search. Think about exact identification. Think about human-friendly and fine-grained semantics. The way we deal with user interfaces and even files is not ideal today, and will not be until we integrate semantics (namely integrate, not just use the Semantic Web).
If you are interested, you can find more on this:
on-meaning.blogspot.com/2011/06/great-blunders-...