I have been analyzing, dreaming, monitoring, crawling, debugging, reading, breathing, cursing, scaling, visualizing and learning the social graph for the last couple of months, and I thought it might be a good idea to write a little something about The Social Graph Challenge, with a pragmatic twist on a few other common concepts.
——— Blitz Introduction to The Social Graph ———
The social graph is just a simplified mathematical abstraction in which nodes are people and edges are the relations between them.
In the last decade the internet has become more social than anyone ever expected it to be, with the rapid growth and adoption of social networks, social media and user-generated contributions and interactions.
Nowadays, there is a growing feeling that it is feasible to model and map the social web into a replication of the real-life social graph.
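The abstraction above is simple to sketch in code. Here is a minimal toy social graph with typed edges; the people and relation names are made up for illustration:

```python
# Toy social graph: nodes are people, edges are typed relations between them.
# Names and relation types here are illustrative, not from any real site.
from collections import defaultdict

class SocialGraph:
    def __init__(self):
        # Adjacency list: person -> list of (relation_type, other_person)
        self.edges = defaultdict(list)

    def add_relation(self, a, relation, b):
        # Directed edge: a --relation--> b
        self.edges[a].append((relation, b))

    def relations_of(self, person):
        return self.edges[person]

g = SocialGraph()
g.add_relation("alice", "friend_of", "bob")
g.add_relation("alice", "follows", "carol")
print(g.relations_of("alice"))  # [('friend_of', 'bob'), ('follows', 'carol')]
```

Everything that follows in this post is, in one way or another, about how hard it is to fill a structure like this with real-world data.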
——— Pragmatic Overview on The Social Graph Challenge ———
To better understand how complicated it is to create a vocabulary for expressing metadata about people, their interests, relationships and activities, you should simply pay a quick visit to the FOAF Project technical specification page.
The FOAF (“Friend of a Friend”) Project has the most comprehensive model available today, and even it lacks some basic modeling granularity, e.g. no time-awareness metadata, no privacy model, and a poor relationship model.
*** The Social Cloud
It is a common mistake to forget that people are more than just flat internet identities (e.g. a LinkedIn profile); to complete the profile modeling we must add all their content to the graph, e.g. personal blogs, Flickr images, YouTube videos, Delicious bookmarks, tweets, blog comments etc.
Modeling all these content and consumption types yields a broader definition (a.k.a. The Social Cloud) with even more complex modeling challenges.
*** The Paradigm Shift
While conventional internet crawlers follow hyperlinks within web pages and treat pages as plain text, social crawlers should have social “awareness”:
- Identify and extract identity fragments of people (e.g. social network profiles, blog authors)
- Identify relationships (e.g. social networks connections, blog-roll fans)
- Identify relations between content and people (author, bookmark, reference etc.)
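To make the three bullets above concrete, here is a rough sketch of one crawl step with that kind of “awareness”: besides collecting plain hyperlinks, it pulls out identity fragments (links to known profile hosts) and a content-to-person relation (an author attribution). The host list and the `rel="author"` pattern are illustrative assumptions, not a complete extractor:

```python
# Sketch of a "socially aware" crawl step using only the stdlib HTML parser.
# PROFILE_HOSTS and the rel="author" heuristic are illustrative assumptions.
from html.parser import HTMLParser

PROFILE_HOSTS = ("facebook.com", "linkedin.com", "flickr.com")  # assumed list

class SocialExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links, self.profiles, self.authors = [], [], []
        self._in_author = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            href = attrs["href"]
            self.links.append(href)                    # plain-crawler behavior
            if any(h in href for h in PROFILE_HOSTS):  # identity fragment
                self.profiles.append(href)
            if attrs.get("rel") == "author":           # content->person relation
                self._in_author = True

    def handle_data(self, data):
        if self._in_author:
            self.authors.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_author = False

page = ('<p>By <a rel="author" href="/moti">Moti</a></p>'
        '<a href="http://www.linkedin.com/in/karmona">my profile</a>')
ex = SocialExtractor()
ex.feed(page)
print(ex.profiles)  # ['http://www.linkedin.com/in/karmona']
print(ex.authors)   # ['Moti']
```

A real crawler would of course need far more robust patterns per site; this only shows the extra extraction layer on top of ordinary link following.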
*** The Standards Dilemma – No Silver Bullet
Besides FOAF, there are several open standards like RSS and Atom for content syndication, and microformats like hCard and XFN for profile and network discovery, that seem promising and can help with the identification quest. But although these are being pushed by giants (e.g. the Google Social Graph API), adoption is still low and there are many correctness and corruption issues – e.g. all these people claimed to be WordPress.com using the XFN (rel=”me”) microformat.
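One simple (and admittedly partial) way to filter bogus rel=”me” claims like the WordPress.com example is to accept an identity assertion A→B only when B links back to A with rel=”me” as well. A sketch, with hand-made link data:

```python
# Filter XFN rel="me" identity claims by requiring reciprocity:
# keep (a, b) only if (b, a) was also asserted. The URLs are made up.
def reciprocal_me(claims):
    """claims: set of (from_url, to_url) rel="me" assertions."""
    return {(a, b) for (a, b) in claims if (b, a) in claims}

claims = {
    ("http://myblog.example", "http://twitter.com/me"),
    ("http://twitter.com/me", "http://myblog.example"),  # reciprocated pair
    ("http://spam.example", "http://wordpress.com"),     # one-way claim
}
print(reciprocal_me(claims))  # only the reciprocated blog/twitter pair survives
```

Reciprocity doesn’t prove identity, but it raises the bar for impersonation considerably, since the claimed target site would have to link back.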
*** The Promise of Structured Sources (a.k.a. The structure myth)
The Myth: Most social media sites (e.g. Facebook, LinkedIn, MySpace, Flickr etc.) have publicly available structured profile pages, so in principle all that needs to be done is some XPath magic on the HTML DOM to finish the parsing task.
But… most of the work isn’t parsing but data modeling, which requires a deep understanding of each site’s user model and usage.
- Many social media sites have EULA restrictions which prohibit any access to or use of the site content, but if you are lucky you will get some official APIs instead.
- Social media sites make frequent (~weekly) structural changes to their CSS/HTML.
*** A Few More Challenges with Social Crawling:
- Privacy-Ownership-Control – The data is the property of the users
- Unstructured Sources – It isn’t a trivial task to extract social entities from unstructured sources (e.g. blogs) and might require offline semantic processing on your collected data.
- Cross-Network Relations – How to find those important hidden cross-network relations, e.g. between the biggest reliable network graph (e.g. Facebook) and the richest content contributions (e.g. the Blogosphere, YouTube, Flickr etc.)
- Identify Social Signs (e.g. Social Widgets, Comments, Blogroll etc.)
- Social Graph Update Mechanism and crawlers distribution
- Profile Canonicalization
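On that last point: the same profile is often reachable under several URL spellings (scheme, case, trailing slash, tracking parameters), and they all have to map to one graph node. A toy canonicalization pass; the rules and the tracking-parameter list are illustrative, not a complete canonicalizer:

```python
# Toy profile-URL canonicalization: lowercase the host, drop "www.",
# strip trailing slashes and (assumed) tracking query parameters.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING = {"utm_source", "utm_medium", "utm_campaign", "ref"}  # assumed list

def canonicalize(url):
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    path = parts.path.rstrip("/") or "/"
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k not in TRACKING])
    return urlunsplit(("http", host, path, query, ""))

a = canonicalize("HTTP://www.Example.com/in/karmona/?utm_source=feed")
b = canonicalize("http://example.com/in/karmona")
print(a == b)  # True -> both spellings map to the same canonical node
```

Without a pass like this, every URL variant becomes a separate node and the graph silently fragments.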
*** The Identity Crisis
- Filtering impersonation, e.g. all these sites use XFN (rel=”me”) to “say” they are TechCrunch
- Identify and have different modeling for non-individual identities (groups, shared authorship) e.g. Knitters Blog with 629 knitting contributors :)
- Strive to merge identities (a.k.a. profile fusion) when possible, e.g. Moti Karmona on LinkedIn and Moti Karmona on Facebook could be two instances (/profiles) of the same person, and merging these profiles will enable:
- Cross-network connectedness => bridging between network richness (e.g. Facebook) and content richness (e.g. the Blogosphere)
- Richer people representation using identity aggregation => richer networks
- The Fusion Challenge: You can pay a short visit to the nearest social aggregator directory, but you can’t get away from some more complex algorithms for disambiguating the web appearances of people with common names, like a James Smith who doesn’t “play” in the social aggregation playground (like 98.7% of the graph).
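A hand-wavy sketch of what the simplest fusion heuristic looks like: merge two profiles when the names match and enough other fields agree. The fields and the scoring are made up; real disambiguation (the “James Smith” case) needs far stronger evidence than this:

```python
# Toy profile-fusion score: 0.0 if names differ, else the fraction of
# (assumed) corroborating fields that agree. Fields/threshold are illustrative.
def fusion_score(p, q):
    if p["name"].lower() != q["name"].lower():
        return 0.0
    signals = ["location", "employer", "homepage"]
    shared = sum(1 for f in signals if p.get(f) and p.get(f) == q.get(f))
    return shared / len(signals)

linkedin = {"name": "Moti Karmona", "employer": "Delver", "location": "IL"}
facebook = {"name": "moti karmona", "employer": "Delver", "location": "IL"}
print(fusion_score(linkedin, facebook))  # ~0.67 -> plausibly the same person
```

For a common name, a score like this is nearly worthless on its own, which is exactly why fusion is listed here as a challenge and not a solved problem.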
*** Graph Enrichment
- Implicit Relations – Enrich the network with “implicit” relationships (colleagues, graduates, neighbors), e.g. I have a LinkedIn profile and all my connections are hidden from public crawlers, but the fact that I work at Delver is public, so if Delver is a startup company with fewer than ~50 people, there is a good chance I know all the other workers at Delver => this simple heuristic rule can create an implicit relation between me and the other workers at Delver without me explicitly claiming that I know them (as I did on Facebook)
- Generating the inverted relations when needed (Followed vs. Follower)
- Deeper, semantic extraction of social entities from unstructured content
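The first two enrichment passes above are mechanical enough to sketch: (1) materialize the inverse of each directed relation, and (2) add implicit “colleague” edges between members of the same small company. The relation names and the ≤50-employee threshold are my illustrative assumptions:

```python
# Two graph-enrichment passes: inverse-edge generation and the
# small-company colleague heuristic. Relation names are assumed.
from itertools import combinations

INVERSE = {"follows": "followed_by"}  # assumed relation naming

def invert(edges):
    """edges: list of (person, relation, person) triples."""
    return [(b, INVERSE[rel], a) for (a, rel, b) in edges if rel in INVERSE]

def implicit_colleagues(employer_of, max_size=50):
    """Add colleague edges within companies of <= max_size known members."""
    by_company = {}
    for person, company in employer_of.items():
        by_company.setdefault(company, []).append(person)
    edges = []
    for people in by_company.values():
        if len(people) <= max_size:  # small-company heuristic from the post
            for a, b in combinations(sorted(people), 2):
                edges.append((a, "colleague_of", b))
    return edges

print(invert([("moti", "follows", "alice")]))
# [('alice', 'followed_by', 'moti')]
print(implicit_colleagues({"moti": "Delver", "dana": "Delver"}))
# [('dana', 'colleague_of', 'moti')]
```

Note that implicit edges like these should probably carry a lower confidence weight than explicitly claimed relations, since the heuristic can be wrong.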
Let’s have some quick (and very dirty) guesstimates:
World population is ~6.7 billion × 22% internet penetration => ~1.5 billion internet users
Let’s say 65% of these users have some kind of presence in social media (~20% have more than one) => ~1 billion profiles × ~10 content items per profile
+ 1 billion profile nodes × ~100 network relations per profile => ~110 billion graph edges + ~11 billion graph nodes
It is highly dependent on the graph implementation, but with these numbers you can easily find yourself with ~1-2 terabytes of graph metadata alone (without contents and profiles*)
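Reproducing the back-of-the-envelope numbers in code, so the arithmetic is easy to check. The 16-bytes-per-edge figure is my own assumption (two 64-bit node ids), just to show how ~110 billion edges land in the 1-2 TB range:

```python
# The guesstimate above, step by step. bytes_per_edge is an assumption
# (two 8-byte node ids per edge), not a claim about any real graph store.
world_population = 6.7e9
internet_users = world_population * 0.22    # ~1.5 billion
profiles = 1e9                              # ~65% of users, rounded to 1B
content_nodes = profiles * 10               # ~10 content items per profile
edges = profiles * 100 + profiles * 10      # network relations + profile->content
bytes_per_edge = 16                         # assumed: two 8-byte node ids
terabytes = edges * bytes_per_edge / 1e12

print(round(internet_users / 1e9, 1))  # 1.5
print(edges / 1e9)                     # 110.0 (billion edges)
print(terabytes)                       # 1.76 (TB of edge metadata alone)
```

Even under these generous roundings, the edge metadata alone sits comfortably in the ~1-2 TB range the post mentions, before a single profile or content item is stored.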
Updating and querying a gigantic, dynamic, distributed, directed, cyclic, colored, weighted graph has “some” algorithmic and computational complexity – a little more than a blog post could cover… ;-)
You can take a quick look at the tiny 15 GB, 25-million-node graph implementation at LinkedIn to get a glimpse of the technological challenge…
* Note: Indexing content and profile data (e.g. for building a social search engine) is an architectural challenge equivalent to any modern search engine with a ~10 billion document index
This is only the tip of the iceberg but it is more than enough for one blog post ;)
Credit: All the images were taken from Tamar Hak’s amazing artwork – the creator of The Delver Kid image.