redismapper
===========
Hadoop job to load Redis from HDFS.
This job only loads multi-maps, using hset(primarykey, hashkey, value).
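For intuition, the multi-map behaves like a nested map: each input record becomes one hset call, so a single primary key can accumulate many hashkey/value pairs. A minimal sketch (not the job's actual code; a real run would go through a Redis client instead of a dict):

```python
# Illustrative model of the multi-map the job builds in Redis.
# Each record contributes one hset(primary_key, hash_key, value) entry.
store = {}  # stands in for Redis

def hset(primary_key, hash_key, value):
    store.setdefault(primary_key, {})[hash_key] = value

hset("12345", "foo", "1.37901")
hset("12345", "baz", "2.5")
print(store["12345"])  # {'foo': '1.37901', 'baz': '2.5'}
```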
Run it with the following options:
required:
-redis <host:port>
-input <PROTOCOL://PATH to CSV>
-key <primarykey>
-hkey <hashkey>
-hval <hashval>
optional:
-db <integer for redis database, default is 0>
-pw <redis password, default is null>
-pkey <prefix to prepend to the primary key ex: foo would yield foo.key>
-hpkey <prefix to prepend to the hash key ex: foo would yield foo.hashkey>
-delim <delimiter to use between prefix and keys, default is ".">
-kf <regex that will exclude records with matching primary keys>
-hf <regex that will exclude records with matching hash keys>
-vf <regex that will exclude records with matching hash values>
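The prefix and filter options compose per record: -pkey/-hpkey prepend a prefix joined by -delim, and -kf/-hf/-vf drop a record whose key or value matches the regex. A sketch of those semantics under assumption (the function names are hypothetical, and whether the job anchors its regexes is not stated in this README):

```python
import re

def apply_prefix(prefix, key, delim="."):
    # -pkey/-hpkey: prepend a prefix joined by -delim (default ".")
    return f"{prefix}{delim}{key}" if prefix else key

def excluded(pattern, text):
    # -kf/-hf/-vf: drop the record when the regex matches
    return bool(pattern and re.search(pattern, text))

print(apply_prefix("foo", "12345"))  # foo.12345
print(excluded("^bad", "badkey"))    # True
print(excluded("^bad", "goodkey"))   # False
```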
Deploy redismapper-1.0-SNAPSHOT-job.jar.
Example: given a CSV file with lines that look as follows:
12345,hello,world,67890,foo,bar,1.37901
23456,hello,world,67890,bad,bar,1.0
hadoop jar redismapper-1.1-job.jar -redis=localhost:6379 -input=/users/mydata -key=0 -hkey=4 -hval=6 -hf=^bad -vf=1.0
this would write 1 of the 2 records to Redis, in the following form:
hset(12345,foo,1.37901)
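The example can be replayed in a few lines: columns 0, 4, and 6 become key, hash key, and value, then -hf=^bad drops the record whose hash key starts with "bad" and -vf=1.0 drops the record whose value matches 1.0. A sketch (regex-search semantics assumed, as above):

```python
import re

# Replays the example: -key=0 -hkey=4 -hval=6 -hf=^bad -vf=1.0
lines = [
    "12345,hello,world,67890,foo,bar,1.37901",
    "23456,hello,world,67890,bad,bar,1.0",
]
written = []
for line in lines:
    fields = line.split(",")
    key, hkey, hval = fields[0], fields[4], fields[6]
    if re.search("^bad", hkey) or re.search("1.0", hval):
        continue  # record excluded by a filter
    written.append((key, hkey, hval))

print(written)  # [('12345', 'foo', '1.37901')]
```

Only the first record survives, matching the hset(12345,foo,1.37901) shown above.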