{ "metadata": { "name": "Chapter1" }, "nbformat": 3, "nbformat_minor": 0, "worksheets": [ { "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "#Mining the Social Web, 1st Edition - Hacking on Twitter Data (Chapter 1)\n", "\n", "If you only have 10 seconds...\n", "\n", "Twitter's new API will prevent you from running much of the code from _Mining the Social Web_, and this IPython Notebook shows you how to roll with the changes and adapt as painlessly as possible until an updated printing is available. In particular, it shows you how to authenticate before executing any API requests and demonstrates how to use the new search API.\n", "\n", "If you have a couple of minutes...\n", "\n", "Twitter is officially retiring v1.0 of their API as of March 2013 with v1.1 of the API being the new status quo. There are a few fundamental differences that social web miners that should consider (see Twitter's blog at https://dev.twitter.com/blog/changes-coming-to-twitter-api and https://dev.twitter.com/docs/api/1.1/overview) with the two changes that are most likely to affect an existing workflow being that authentication is now mandatory for *all* requests, rate-limiting being on a per resource basis (as opposed to an overall rate limit based on a fixed number of requests per unit time), various platform objects changing (for the better), and search semantics changing to a \"pageless\" approach. All in all, the v1.1 API looks much cleaner and more consistent, and it should be a good thing longer-term although it may cause interim pains for folks migrating to it.\n", "\n", "The latest printing of Mining the Social Web (2012-02-22, Third release) reflects v1.0 of the API, and this document is intended to provide readers with updated information and examples that specifically related to Twitter requests from Chapter 1 of the book until a new printing provides updates.\n", "\n", "I've tried to add some filler in between the examples below so that they flow and are easy to follow along; however, they'll make a lot more sense to you if you are following along with the text. The great news is that you can download the sample chapter that corresponds to this IPython Notebook at http://shop.oreilly.com/product/0636920010203.do free of charge!\n", "\n", "IPython Notebooks are also available for Chapters 4 and 5. As a reader of my book, I want you to know that I'm committed to helping you in any way that I can, so please reach out on Facebook at https://www.facebook.com/MiningTheSocialWeb or on Twitter at http://twitter.com/SocialWebMining if you have any questions or concerns in the meanwhile. I'd also love your feedback on whether or not you think that IPython Notebook is a good tool for tinkering with the source code for the book, because I'm strongly considering it as a supplement for each chapter.\n", "\n", "Regards - Matthew A. Russell" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Examples Adapted From The Book\n", "\n", "Twitter implements OAuth 1.0A as its standard authentication mechanism, and in order to use it to make requests to Twitter's API, you'll need to go to https://dev.twitter.com/apps and create a sample application. There are four primary identifiers you'll need to note for an OAuth 1.0A workflow: consumer key, consumer secret, access token, and access token secret. 
Note that you will need an ordinary Twitter account in order to log in, create an app, and get these credentials.\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The examples in this chapter take advantage of the excellent `twitter` package (see https://github.com/sixohsix/twitter), which you can install in a terminal with `easy_install` or `pip` as follows:" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "```\n", "$ easy_install twitter\n", "$ pip install twitter\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Once installed, you should be able to open up a Python interpreter (or better yet, your IPython interpreter) and get rolling. See http://ipython.org/ for more information on IPython if you're not already using it or viewing this file in IPython Notebook." ] }, { "cell_type": "code", "collapsed": false, "input": [ "import twitter\n", "\n", "# Go to http://twitter.com/apps/new to create an app and get these items\n", "# See https://dev.twitter.com/docs/auth/oauth for more information on Twitter's OAuth implementation\n", "\n", "CONSUMER_KEY = ''\n", "CONSUMER_SECRET = ''\n", "OAUTH_TOKEN = ''\n", "OAUTH_TOKEN_SECRET = ''\n", "\n", "auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,\n", "                           CONSUMER_KEY, CONSUMER_SECRET)\n", "\n", "twitter_api = twitter.Twitter(domain='api.twitter.com', \n", "                              api_version='1.1',\n", "                              auth=auth\n", "                             )" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-3. Retrieving Twitter trends**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# With an authenticated twitter_api in existence, you can now use it to query Twitter resources as usual.\n", "# However, the trends resource is cleaned up a bit in v1.1, so requests are a bit simpler than in the latest\n", "# printing. See https://dev.twitter.com/docs/api/1.1/get/trends/place\n", "\n", "# The Yahoo! Where On Earth ID for the entire world is 1\n", "WORLD_WOE_ID = 1\n", "\n", "# Prefix id with an underscore for query string parameterization.\n", "# Without the underscore, it's appended to the URL itself\n", "world_trends = twitter_api.trends.place(_id=WORLD_WOE_ID)\n", "print world_trends" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "IPython Notebook didn't display the output because the result of the API call was captured as a variable. Here's how you could print a readable version of the response." ] }, { "cell_type": "code", "collapsed": false, "input": [ "import json\n", "print json.dumps(world_trends, indent=1)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that you're authenticated and understand a bit more about making requests with Twitter's new API, let's look at some examples that involve search requests, which are a bit different with v1.1 of the API." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-4. Paging through Twitter search results**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# As with the rest of the v1.1 API, search requests now require authentication and have a slightly different request and\n", "# response format. 
See https://dev.twitter.com/docs/api/1.1/get/search/tweets\n", "\n", "q = \"SNL\"\n", "count = 100\n", "\n", "search_results = twitter_api.search.tweets(q=q, count=count)\n", "statuses = search_results['statuses']\n", "\n", "# v1.1 of Twitter's API provides a value in the response for the next batch of results that needs to be parsed out\n", "# and passed back in as keyword args if you want to retrieve more than one page. It appears in the 'search_metadata'\n", "# field of the response object and has the following form: '?max_id=313519052523986943&q=NCAA&include_entities=1'\n", "# The tweets themselves are encoded in the 'statuses' field of the response\n", "\n", "\n", "# Here's how you would grab five more batches of results and collect the statuses as a list\n", "for _ in range(5):\n", "    try:\n", "        next_results = search_results['search_metadata']['next_results']\n", "    except KeyError: # No more results when next_results doesn't exist\n", "        break\n", "\n", "    kwargs = dict([ kv.split('=') for kv in next_results[1:].split(\"&\") ]) # Create a dictionary from the query string params\n", "    search_results = twitter_api.search.tweets(**kwargs)\n", "    statuses += search_results['statuses']" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Just like before, here's how you could print out the statuses, though beware that there are many of them. (Note that IPython pretty-prints Python dictionary objects automatically in your terminal-based session.)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import json\n", "print json.dumps(statuses[0:2], indent=1) # Only print a couple of tweets here in IPython Notebook" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-5. Pretty-printing Twitter data as JSON**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Python's list comprehensions are very useful indeed. Here's how to use a list comprehension to extract the text from the rather bloated status objects." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-6. A simple list comprehension in Python**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "tweets = [ status['text'] for status in statuses ]\n", "\n", "print tweets[0]" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's run a few basic calculations on the data and compute its lexical diversity." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-7. Calculating lexical diversity for tweets**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "words = []\n", "for t in tweets:\n", "    words += [ w for w in t.split() ]\n", "\n", "# total words\n", "print len(words)\n", "\n", "# unique words\n", "print len(set(words))\n", "\n", "# lexical diversity\n", "print 1.0*len(set(words))/len(words)\n", "\n", "# avg words per tweet\n", "print 1.0*sum([ len(t.split()) for t in tweets ])/len(tweets)" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you've already installed the ```nltk``` package, you can use it to compute a frequency distribution. (Although you _may_ receive a warning or error message about PyYAML during installation in certain environments, you can probably ignore it safely.) 
Recall that to install a package, you can use ```easy_install``` or ```pip``` as follows:\n", "\n", "```\n", "$ easy_install nltk\n", "$ pip install nltk\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-9. Using NLTK to perform basic frequency analysis**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import nltk\n", "\n", "freq_dist = nltk.FreqDist(words)\n", "print freq_dist.keys()[:50] # 50 most frequent tokens\n", "print freq_dist.keys()[-50:] # 50 least frequent tokens" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "If you're running Python 2.7, note that you can use the new ```collections.Counter``` class to effectively do the same thing." ] }, { "cell_type": "code", "collapsed": false, "input": [ "from collections import Counter\n", "\n", "counter = Counter(words)\n", "print counter.most_common(50)\n", "print counter.most_common()[-50:] # doesn't have a least_common method, so get them all and slice" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Although there are some additional ways to find retweets based on Twitter's evolved API (see https://dev.twitter.com/docs/platform-objects/tweets for some ideas), one way that operates on the text of a tweet itself is to parse out features of the text that provide clues that the status is a retweet. Regular expressions are a good approach for this problem, and a brief sketch of the platform-object alternative appears just after Example 1-10 below." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-10. Using regular expressions to find retweets**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import re\n", "rt_patterns = re.compile(r\"(RT|via)((?:\\b\\W*@\\w+)+)\", re.IGNORECASE)\n", "example_tweets = [\"RT @SocialWebMining Justin Bieber is on SNL 2nite. w00t?!?\", \n", "                  \"Justin Bieber is on SNL 2nite. w00t?!? (via @SocialWebMining)\"]\n", "for t in example_tweets:\n", "    print rt_patterns.findall(t)" ], "language": "python", "metadata": {}, "outputs": [] }, 
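{ "cell_type": "markdown", "metadata": {}, "source": [ "As an aside that isn't part of the book's examples, here is one possible sketch of the platform-object approach mentioned above: in v1.1, statuses that were created with Twitter's retweet button carry a 'retweeted_status' field, so you can detect those retweets without parsing any text. (This won't catch manually typed retweets such as 'RT @user ...', which is exactly what the regular expression in Example 1-10 handles.)" ] }, { "cell_type": "code", "collapsed": false, "input": [ "# A sketch (not from the book) of the platform-object approach: native retweets created\n", "# with the retweet button include a 'retweeted_status' field in v1.1 responses\n", "native_retweets = [ status for status in statuses if 'retweeted_status' in status ]\n", "print len(native_retweets)\n", "\n", "# This won't catch manually typed retweets like 'RT @user ...', which is why the\n", "# regular expression approach in Example 1-10 is still useful" ], "language": "python", "metadata": {}, "outputs": [] }, 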
{ "cell_type": "markdown", "metadata": {}, "source": [ "The possibilities for what you can do with Twitter data are endless. Here's some sample code that builds a graph of who retweeted whom, which you can use as a starting point for further exploration." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Example 1-11. Building and analyzing a graph describing who retweeted whom**" ] }, { "cell_type": "code", "collapsed": false, "input": [ "import networkx as nx\n", "import re\n", "g = nx.DiGraph()\n", "\n", "def get_rt_sources(tweet):\n", "    rt_patterns = re.compile(r'(RT|via)((?:\\b\\W*@\\w+)+)', re.IGNORECASE)\n", "    return [ source.strip()\n", "             for tup in rt_patterns.findall(tweet)\n", "             for source in tup\n", "             if source not in (\"RT\", \"via\") ]\n", "\n", "for status in statuses:\n", "    rt_sources = get_rt_sources(status['text'])\n", "    if not rt_sources: continue\n", "    for rt_source in rt_sources:\n", "        g.add_edge(rt_source, status['user']['screen_name'], {'tweet_id' : status['id']})\n", "\n", "print nx.info(g)\n", "print g.edges(data=True)[0]\n", "print len(nx.connected_components(g.to_undirected()))\n", "print sorted(nx.degree(g).values())" ], "language": "python", "metadata": {}, "outputs": [] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In _Mining the Social Web_'s GitHub repository, there is a thorough sample file that takes all of the concepts that have been presented and weaves them into a more comprehensive turnkey example and visualization. Visit https://github.com/ptwobrussell/Mining-the-Social-Web and, in particular, run the sample file located at https://github.com/ptwobrussell/Mining-the-Social-Web/blob/master/python_code/introduction__retweet_visualization.py, which should automatically launch your web browser and provide you with an interactive visualization similar to the following:\n", "\n", "" ] } ], "metadata": {} } ] }