Druid in practice: batch-loading data from local files and from HDFS
This post shows how to batch-load data with the indexing service: how to write the task file, and how to point it at local files or at HDFS. Much of this is not spelled out in the manual, which makes it hard to get the configuration right.
First set up and start a few nodes: coordinator, historical, overlord, and middleManager.
Prerequisites: MySQL (http://my.oschina.net/u/2460844/blog/637334 covers the MySQL setup), an HDFS cluster, and ZooKeeper (a standalone instance is fine).
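The metadata database named in the connectURI below also has to exist before Druid starts. A minimal sketch, assuming a MySQL 5.x server on 10.70.27.12 and a root account (the fool/fool credentials match the _common configuration in the next step):
# mysql -h 10.70.27.12 -u root -p -e "CREATE DATABASE IF NOT EXISTS druid DEFAULT CHARACTER SET utf8;"
# mysql -h 10.70.27.12 -u root -p -e "GRANT ALL ON druid.* TO 'fool'@'%' IDENTIFIED BY 'fool';"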
1. _common configuration:
druid.extensions.loadList=["mysql-metadata-storage","druid-hdfs-storage"]
druid.startup.logging.logProperties=true
druid.zk.service.host=10.70.27.8:2181,10.70.27.10:2181,10.70.27.12:2181
druid.zk.paths.base=/druid
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.70.27.12:3306/druid
druid.metadata.storage.connector.user=fool
druid.metadata.storage.connector.password=fool
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://10.70.27.3:9000/data/druid/segments
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/data/druid/indexing-logs
druid.monitoring.monitors=["io.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
druid.indexing.doubleStorage=double
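The two entries in druid.extensions.loadList are only picked up if the corresponding extension directories are actually present in the distribution. A quick sanity check, assuming the standard layout with an extensions/ directory under the Druid installation root:
# ls extensions/mysql-metadata-storage extensions/druid-hdfs-storage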
2. coordinator configuration:
druid.host=druid01
druid.port=8081
druid.service=coordinator
druid.coordinator.startDelay=PT5M
3. historical configuration:
druid.host=druid02
druid.port=8082
druid.service=druid/historical
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.processing.buffer.sizeBytes=100000000
druid.processing.numThreads=3
druid.server.http.numThreads=5
druid.server.maxSize=300000000000
druid.segmentCache.locations=[{"path": "/tmp/druid/indexCache", "maxSize": 300000000000}]
druid.monitoring.monitors=["io.druid.server.metrics.HistoricalMetricsMonitor", "com.metamx.metrics.JvmMonitor"]
4. overlord configuration:
druid.host=druid03
druid.port=8090
druid.service=overlord
druid.indexer.autoscale.doAutoscale=true
druid.indexer.autoscale.strategy=ec2
druid.indexer.autoscale.workerIdleTimeout=PT90m
druid.indexer.autoscale.terminatePeriod=PT5M
druid.indexer.autoscale.workerVersion=0
druid.indexer.logs.type=local
druid.indexer.logs.directory=/tmp/druid/indexlog
druid.indexer.runner.type=remote
druid.indexer.runner.minWorkerVersion=0
# Store all task state in the metadata storage
druid.indexer.storage.type=metadata
#druid.indexer.fork.property.druid.processing.numThreads=1
#druid.indexer.fork.property.druid.computation.buffer.size=100000000
5. middleManager configuration:
druid.host=druid04
druid.port=8091
druid.service=druid/middlemanager
druid.indexer.logs.type=local
druid.indexer.logs.directory=/tmp/druid/indexlog
druid.indexer.fork.property.druid.processing.numThreads=5
druid.indexer.fork.property.druid.computation.buffer.size=100000000
# Resources for peons
druid.indexer.runner.javaOpts=-server -Xmx3g
druid.indexer.task.baseTaskDir=/tmp/persistent/task/
6. Start each node in turn. If a node fails to start, the cause is most likely memory; adjust the JVM parameters as needed, for example along the lines of the sketch below.
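A typical launch line for a single node looks like the following. The classpath layout is an assumption (adjust it to wherever your config directories actually live), and -Xmx is the knob to raise if a node dies with OutOfMemoryError:
# java -Xms512m -Xmx2g -classpath "config/_common:config/overlord:lib/*" io.druid.cli.Main server overlord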
7. The data to import: wikipedia_data.csv and wikipedia_data.json.
---wikipedia_data.json:
- {"timestamp": "2013-08-31T01:02:33Z", "page": "Gypsy Danger", "language" : "en", "user" : "nuclear", "unpatrolled" : "true", "newPage" : "true", "robot": "false", "anonymous": "false", "namespace":"article", "continent":"North America", "country":"United States", "region":"Bay Area", "city":"San Francisco", "added": 57, "deleted": 200, "delta": -143}
- {"timestamp": "2013-08-31T03:32:45Z", "page": "Striker Eureka", "language" : "en", "user" : "speed", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Australia", "country":"Australia", "region":"Cantebury", "city":"Syndey", "added": 459, "deleted": 129, "delta": 330}
- {"timestamp": "2013-08-31T07:11:21Z", "page": "Cherno Alpha", "language" : "ru", "user" : "masterYi", "unpatrolled" : "false", "newPage" : "true", "robot": "true", "anonymous": "false", "namespace":"article", "continent":"Asia", "country":"Russia", "region":"Oblast", "city":"Moscow", "added": 123, "deleted": 12, "delta": 111}
- {"timestamp": "2013-08-31T11:58:39Z", "page": "Crimson Typhoon", "language" : "zh", "user" : "triplets", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"China", "region":"Shanxi", "city":"Taiyuan", "added": 905, "deleted": 5, "delta": 900}
- {"timestamp": "2013-08-31T12:41:27Z", "page": "Coyote Tango", "language" : "ja", "user" : "cancer", "unpatrolled" : "true", "newPage" : "false", "robot": "true", "anonymous": "false", "namespace":"wikipedia", "continent":"Asia", "country":"Japan", "region":"Kanto", "city":"Tokyo", "added": 1, "deleted": 10, "delta": -9}
---wikipedia_data.csv:
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Francisco, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Francisc, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Francis, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Franci, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Franc, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Fran, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Fra, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San Fr, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, San F, 57, 200, -143
2013-08-31T01:02:33Z, Gypsy Danger, en, nuclear, true, true, false, false, article, North America, United States, Bay Area, Sa , 57, 200, -143
Note: if the data files are imported from local disk, they must sit on the middleManager node, otherwise the task will not find them after submission. For an HDFS import, simply put the files into the HDFS filesystem first. Here the overlord node is druid03 (substitute its IP if you prefer).
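For the HDFS imports in section 9, the upload is a plain hdfs dfs -put; the target directory below matches the paths used in the HDFS task files later in this post:
# hdfs dfs -mkdir -p /tmp/druid/datasource
# hdfs dfs -put wikipedia_data.json /tmp/druid/datasource/
# hdfs dfs -put wikipedia_data.csv /tmp/druid/datasource/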
8. Importing local data into Druid
On any node (as long as it can reach druid03), create a JSON index task.
8.1 Importing a locally stored JSON-format file; the task JSON is shown below.
8.1.1 First save wikipedia_data.json into the Druid directory on the middleManager node (e.g. /root/druid-0.8.3).
vi wikipedia_index_local_json_task.json with the following content.
{
    "type" : "index",
    "spec" : {
        "dataSchema" : {
            "dataSource" : "wikipedia",
            "parser" : {
                "type" : "string",
                "parseSpec" : {
                    "format" : "json",
                    "timestampSpec" : {
                        "column" : "timestamp",
                        "format" : "auto"
                    },
                    "dimensionsSpec" : {
                        "dimensions": ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
                        "dimensionExclusions" : [],
                        "spatialDimensions" : []
                    }
                }
            },
            "metricsSpec" : [
                {
                    "type" : "count",
                    "name" : "count"
                },
                {
                    "type" : "doubleSum",
                    "name" : "added",
                    "fieldName" : "added"
                },
                {
                    "type" : "doubleSum",
                    "name" : "deleted",
                    "fieldName" : "deleted"
                },
                {
                    "type" : "doubleSum",
                    "name" : "delta",
                    "fieldName" : "delta"
                }
            ],
            "granularitySpec" : {
                "type" : "uniform",
                "segmentGranularity" : "DAY",
                "queryGranularity" : "NONE",
                "intervals" : [ "2013-08-31/2013-09-01" ]
            }
        },
        "ioConfig": {
            "type": "index",
            "firehose": {
                "type": "local",
                "baseDir": "./",
                "filter": "wikipedia_data.json"
            }
        },
        "tuningConfig": {
            "type": "index",
            "targetPartitionSize": 0,
            "rowFlushBoundary": 0
        }
    }
}
8.1.2 Submit the task. As noted above, the overlord runs on druid03, so the task must be posted to druid03:
# curl -X "POST" -H "Content-Type:application/json" -d @wikipedia_index_local_json_task.json druid03:8090/druid/indexer/v1/task
The task's progress can be followed in the overlord node's log; output like the following means the task succeeded:
2016-03-29T17:35:11,385 INFO [forking-task-runner-1] io.druid.indexing.overlord.ForkingTaskRunner - Logging task index_hadoop_NN_2016-03-29T17:35:11.510+08:00 output to: /tmp/persistent/task/index_hadoop_NN_2016-03-29T17:35:11.510+08:00/log
2016-03-29T17:42:15,263 INFO [forking-task-runner-1] io.druid.indexing.overlord.ForkingTaskRunner - Process exited with status[0] for task: index_hadoop_NN_2016-03-29T17:35:11.510+08:00
2016-03-29T17:42:15,265 INFO [forking-task-runner-1] io.druid.indexing.common.tasklogs.FileTaskLogs - Wrote task log to: /tmp/druid/indexlog/index_hadoop_NN_2016-03-29T17:35:11.510+08:00.log
2016-03-29T17:42:15,267 INFO [forking-task-runner-1] io.druid.indexing.overlord.ForkingTaskRunner - Removing task directory: /tmp/persistent/task/index_hadoop_NN_2016-03-29T17:35:11.510+08:00
2016-03-29T17:42:15,284 INFO [WorkerTaskMonitor-1] io.druid.indexing.worker.WorkerTaskMonitor - Job's finished. Completed [index_hadoop_NN_2016-03-29T17:35:11.510+08:00] with status [SUCCESS]
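Instead of tailing the log, you can also poll the overlord's status endpoint; the submit call returns a task id, which plugs into the URL below (<taskId> is a placeholder):
# curl http://druid03:8090/druid/indexer/v1/task/<taskId>/status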
8.2 Example task file for a local CSV import.
wikipedia_data.csv must first be saved in the Druid directory on the middleManager node (e.g. /root/druid-0.8.3).
8.2.1 The task file wikipedia_index_local_csv_task.json:
{
    "type": "index",
    "spec": {
        "dataSchema": {
            "dataSource": "wikipedia",
            "parser": {
                "type": "string",
                "parseSpec": {
                    "format" : "csv",
                    "timestampSpec" : {
                        "column" : "timestamp"
                    },
                    "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"],
                    "dimensionsSpec" : {
                        "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
                    }
                }
            },
            "metricsSpec": [
                {
                    "type": "count",
                    "name": "count"
                },
                {
                    "type": "doubleSum",
                    "name": "added",
                    "fieldName": "added"
                },
                {
                    "type": "doubleSum",
                    "name": "deleted",
                    "fieldName": "deleted"
                },
                {
                    "type": "doubleSum",
                    "name": "delta",
                    "fieldName": "delta"
                }
            ],
            "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "DAY",
                "queryGranularity": "NONE",
                "intervals": ["2013-08-31/2013-09-01"]
            }
        },
        "ioConfig": {
            "type": "index",
            "firehose": {
                "type": "local",
                "baseDir": "./",
                "filter": "wikipedia_data.csv"
            }
        },
        "tuningConfig": {
            "type": "index",
            "targetPartitionSize": 0,
            "rowFlushBoundary": 0
        }
    }
}
8.2.2 Submit the task. Again, the overlord runs on druid03, so post the task there:
# curl -X "POST" -H "Content-Type:application/json" -d @wikipedia_index_local_csv_task.json druid03:8090/druid/indexer/v1/task
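Once a task reports SUCCESS and the coordinator has loaded the segments, the datasource should be listed by the coordinator (druid01:8081 per the configuration above):
# curl http://druid01:8081/druid/coordinator/v1/datasources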
Next, how to import CSV- and JSON-format files from HDFS.
9. Importing data from HDFS into Druid
9.1 Importing a JSON file from HDFS.
First put wikipedia_data.json into HDFS and note the directory: that path goes into the task file, and it must include the HDFS namenode's hostname or IP. Here vm1.cci stands in for the namenode address. Pay attention to how this task file differs from the local-import one; those differences decide whether the import succeeds.
Then submit the task to the overlord just as for a local import (the exact command follows the task file below).
The task.json file looks like this:
{
    "type" : "index_hadoop",
    "spec" : {
        "dataSchema" : {
            "dataSource" : "wikipedia",
            "parser" : {
                "type" : "string",
                "parseSpec" : {
                    "format" : "json",
                    "timestampSpec" : {
                        "column" : "timestamp",
                        "format" : "auto"
                    },
                    "dimensionsSpec" : {
                        "dimensions": ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"],
                        "dimensionExclusions" : [],
                        "spatialDimensions" : []
                    }
                }
            },
            "metricsSpec" : [
                {
                    "type" : "count",
                    "name" : "count"
                },
                {
                    "type" : "doubleSum",
                    "name" : "added",
                    "fieldName" : "added"
                },
                {
                    "type" : "doubleSum",
                    "name" : "deleted",
                    "fieldName" : "deleted"
                },
                {
                    "type" : "doubleSum",
                    "name" : "delta",
                    "fieldName" : "delta"
                }
            ],
            "granularitySpec" : {
                "type" : "uniform",
                "segmentGranularity" : "DAY",
                "queryGranularity" : "NONE",
                "intervals" : [ "2013-08-31/2013-09-01" ]
            }
        },
        "ioConfig" : {
            "type" : "hadoop",
            "inputSpec" : {
                "type" : "static",
                "paths" : "hdfs://vm1.cci/tmp/druid/datasource/wikipedia_data.json"
            }
        },
        "tuningConfig" : {
            "type": "hadoop"
        }
    }
}
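Submission is the same as for the local tasks. Assuming the task JSON above is saved as wikipedia_index_hdfs_json_task.json (the filename is arbitrary):
# curl -X "POST" -H "Content-Type:application/json" -d @wikipedia_index_hdfs_json_task.json druid03:8090/druid/indexer/v1/task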
9.2 Importing a CSV file from HDFS.
The task.json file looks like this:
{
    "type": "index_hadoop",
    "spec": {
        "dataSchema": {
            "dataSource": "wikipedia",
            "parser": {
                "type": "string",
                "parseSpec": {
                    "format" : "csv",
                    "timestampSpec" : {
                        "column" : "timestamp"
                    },
                    "columns" : ["timestamp","page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city","added","deleted","delta"],
                    "dimensionsSpec" : {
                        "dimensions" : ["page","language","user","unpatrolled","newPage","robot","anonymous","namespace","continent","country","region","city"]
                    }
                }
            },
            "metricsSpec": [
                {
                    "type": "count",
                    "name": "count"
                },
                {
                    "type": "doubleSum",
                    "name": "added",
                    "fieldName": "added"
                },
                {
                    "type": "doubleSum",
                    "name": "deleted",
                    "fieldName": "deleted"
                },
                {
                    "type": "doubleSum",
                    "name": "delta",
                    "fieldName": "delta"
                }
            ],
            "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "DAY",
                "queryGranularity": "NONE",
                "intervals": ["2013-08-31/2013-09-01"]
            }
        },
        "ioConfig" : {
            "type" : "hadoop",
            "inputSpec" : {
                "type" : "static",
                "paths" : "hdfs://vm1.cci/tmp/druid/datasource/wikipedia_data.csv"
            }
        },
        "tuningConfig" : {
            "type": "hadoop"
        }
    }
}
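Submit it with curl in the same way. To double-check that the resulting segments were published, the coordinator can list them for the wikipedia datasource used by all four tasks:
# curl http://druid01:8081/druid/coordinator/v1/datasources/wikipedia/segments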
Summary: druid.io has an enormous number of configuration options, and an oversight in any one of them can make a task fail. The four examples above are given separately because their differences matter; beginners almost inevitably stumble here.
Note: if the Hadoop version your Druid extension uses differs from the version running on the target Hadoop cluster, you must use the Hadoop that ships with Druid, otherwise the Hadoop MapReduce job will not start. The two settings below select Druid's bundled Hadoop, which in this case is version 2.7.3:
"tuningConfig" : {
"type" : "hadoop",
"partitionsSpec" : {
"type" : "hashed",
"targetPartitionSize" : 5000000
},
"jobProperties" : {
"mapreduce.job.classloader":"true"
}
}
},
"hadoopDependencyCoordinates": [
"org.apache.hadoop:hadoop-client:2.7.3"
]