---
title: 'Season II: 7. How to index and search Yelp data with Solr'
author: admin
type: post
date: 2013-11-04T04:33:00+00:00
url: /hadoop-tutorials-season-ii-7-how-to-index-and-search/
tumblr_gethue_permalink:
  - http://gethue.tumblr.com/post/65969470780/hadoop-tutorials-season-ii-7-how-to-index-and-search
tumblr_gethue_id:
  - 65969470780
sf_thumbnail_type:
  - none
sf_thumbnail_link_type:
  - link_to_post
sf_detail_type:
  - none
sf_page_title:
  - 1
sf_page_title_style:
  - standard
sf_no_breadcrumbs:
  - 1
sf_page_title_bg:
  - none
sf_page_title_text_style:
  - light
sf_background_image_size:
  - cover
sf_social_sharing:
  - 1
sf_sidebar_config:
  - left-sidebar
sf_left_sidebar:
  - Sidebar-2
sf_right_sidebar:
  - Sidebar-1
sf_caption_position:
  - caption-right
categories:
  - Tutorial
---

In the previous episode we saw how to use Pig and Hive with HBase. This time, let’s see how to make our Yelp data searchable by indexing it and building a customizable UI with the Hue Search app.

{{< youtube ATldKiiJdqY >}}

# Indexing data into Solr

This tutorial is based on [SolrCloud][1]. Here is a step-by-step [guide][2] about its installation, together with the list of required [packages][2]:

- solr-server
- solr-mapreduce
- search

The next step is deploying and configuring SolrCloud. We are following the [documentation][3].

After this, we [create][4] a new collection and index named 'reviews'. We use our predefined schema, which needs to be copied from the [Hadoop tutorial github][5]:
```bash
cp solr_local/conf/schema.xml solr_configs/conf/schema.xml

solrctl instancedir --create reviews solr_local

solrctl collection --create reviews -s 1
```
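As a quick sanity check (not part of the original steps), solrctl can list the uploaded instance directories and the existing collections; both listings should now include 'reviews':

```bash
# Hedged sanity check: confirm the config and the collection exist.
solrctl instancedir --list
solrctl collection --list
```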
We replace the field definitions in the [schema][6] with a mapping corresponding to our Yelp data. The schema lists each data field that will be available in the search index. You can read more about schema.xml in the [Solr wiki][7]:
 <field name="business_id" type="text_en" indexed="true" stored="true" />  
  <field name="cool" type="tint" indexed="true" stored="true" />
  <field name="date" type="text_en" indexed="true" stored="true" />
  <field name="funny" type="tint" indexed="true" stored="true" />
  <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />  
  <field name="stars" type="tint" indexed="true" stored="true" />
  <field name="text" type="text_en" indexed="true" stored="true" />
  <field name="type" type="text_en" indexed="true" stored="true" />         
  <field name="useful" type="tint" indexed="true" stored="true" />
  <field name="user_id" type="text_en" indexed="true" stored="true" />
  <field name="name" type="text_en" indexed="true" stored="true" />
  <field name="full_address" type="text_en" indexed="true" stored="true" />
  <field name="latitude" type="tfloat" indexed="true" stored="true" />
  <field name="longitude" type="tfloat" indexed="true" stored="true" />
  <field name="neighborhoods" type="text_en" indexed="true" stored="true" />
  <field name="open" type="text_en" indexed="true" stored="true" />
  <field name="review_count" type="tint" indexed="true" stored="true" />
  <field name="state" type="text_en" indexed="true" stored="true" />
Then, we retrieve and clean a subset of our Yelp data with a [Hive query][8], download it as CSV, and index it with the [indexer tool][9] using this [command][10]:
```bash
hadoop jar /usr/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool \
  -D 'mapred.child.java.opts=-Xmx500m' \
  --log4j /usr/share/doc/search*/examples/solr-nrt/log4j.properties \
  --morphline-file solr_local/reviews.conf \
  --output-dir hdfs://localhost:8020/tmp/load \
  --verbose \
  --go-live \
  --zk-host localhost:2181/solr \
  --collection reviews \
  hdfs://localhost:8020/tmp/query_result.csv
```
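Before launching the full job, the morphline mapping (described just below) can be checked without touching the live collection by dropping --go-live and adding the tool's --dry-run option. A minimal sketch, reusing the same arguments:

```bash
# Hedged sketch: same tool and morphline, but --dry-run processes the input
# locally and prints the resulting documents instead of loading them into Solr.
hadoop jar /usr/lib/solr/contrib/mr/search-mr-*-job.jar \
  org.apache.solr.hadoop.MapReduceIndexerTool \
  --morphline-file solr_local/reviews.conf \
  --output-dir hdfs://localhost:8020/tmp/load \
  --dry-run \
  --verbose \
  hdfs://localhost:8020/tmp/query_result.csv
```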
The command above uses our [morphline file][11] to map the Yelp data to the fields defined in the index schema.xml. While debugging the morphline, the --dry-run option (sketched above) will save you some time.

# Customize the search result

The administration panel lets you tweak the look & feel and the features of the search page. This is explained in the second part of the video.

# Conclusion

Cloudera Search is a great way to open up Hadoop to your user base and to do quick data retrieval. Other articles describe some great use cases, like [email][12] or [customer data][13] search. Cloudera Morphlines is also an interesting tool for facilitating the indexing of your data. You can learn more about it on its [project website][14].

As usual, feel free to comment on the [hue-user][15] list or [@gethue][16]!

# Troubleshooting

1. If you see this error:

```
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'reviews_shard1_replica1': Unable to create core: reviews_shard1_replica1 Caused by: Could not find configName for collection reviews found:null
```

You might have forgotten to upload the instance directory configuration before creating the collection:
```bash
solrctl instancedir --create reviews solr_configs
```

2. If you see this error:
```
ERROR - 2013-10-10 20:01:21.383; org.apache.solr.servlet.SolrDispatchFilter; Could not start Solr. Check solr/home property and the logs
ERROR - 2013-10-10 20:01:21.409; org.apache.solr.common.SolrException; null:org.apache.solr.common.SolrException: solr.xml not found in ZooKeeper
       at org.apache.solr.core.ConfigSolr.fromSolrHome(ConfigSolr.java:109)
Server is shutting down
```
You might need to force Solr to reload the configuration, as sketched below. Beware: this might break ZooKeeper, and you might then need to read error #3.
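The original tip does not spell out the exact commands; one plausible way to force the configuration to be reloaded is to re-initialize Solr's ZooKeeper namespace with solrctl (a hedged sketch, not from the original post; it wipes the existing collection state, which is exactly how error #3 can show up afterwards):

```bash
# Hypothetical recovery step: re-initialize Solr's state in ZooKeeper.
# WARNING: this clears existing instance directories and collection metadata.
solrctl init --force
```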
3. If you see this error:

```
KeeperErrorCode = NoNode for /overseer/collection-queue-work</str>
<str name="trace">
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /overseer/collection-queue-work
```
It probably comes from error #2. You might need to re-upload the config and recreate the collection.

 [1]: http://wiki.apache.org/solr/SolrCloud
 [2]: http://www.cloudera.com/content/cloudera-content/cloudera-docs/Search/latest/Cloudera-Search-Installation-Guide/csig_install_search.html
 [3]: http://www.cloudera.com/content/cloudera-content/cloudera-docs/Search/latest/Cloudera-Search-Installation-Guide/csig_deploy_search_solrcloud.html
 [4]: http://www.cloudera.com/content/cloudera-content/cloudera-docs/Search/latest/Cloudera-Search-Installation-Guide/csig_runtime_solr_config.html
 [5]: https://github.com/romainr/hadoop-tutorials-examples/tree/master/solr-local-search
 [6]: https://github.com/romainr/hadoop-tutorials-examples/blob/master/solr-local-search/solr_local/conf/schema.xml#L109
 [7]: http://wiki.apache.org/solr/SchemaXml
 [8]: https://github.com/romainr/hadoop-tutorials-examples/blob/master/solr-local-search/data_subset.sql
 [9]: http://www.cloudera.com/content/cloudera-content/cloudera-docs/Search/latest/Cloudera-Search-User-Guide/csug_batch_index_to_solr_servers_using_golive.html
 [10]: https://github.com/romainr/hadoop-tutorials-examples/blob/master/solr-local-search/load_index.sh
 [11]: https://github.com/romainr/hadoop-tutorials-examples/blob/master/solr-local-search/solr_local/reviews.conf
 [12]: http://blog.cloudera.com/blog/2013/09/email-indexing-using-cloudera-search/
 [13]: http://blog.cloudera.com/blog/2013/09/secrets-of-cloudera-support-impala-and-search-make-the-customer-experience-even-better/
 [14]: http://cloudera.github.io/cdk/docs/current/cdk-morphlines/index.html
 [15]: http://groups.google.com/a/cloudera.org/group/hue-user
 [16]: https://twitter.com/gethue