Posted to user@hadoop.apache.org by Mohit Vadhera <pr...@gmail.com> on 2013/04/19 08:12:14 UTC
jobtracker is stopping because of permissions
Can anybody help me start the JobTracker service? It is urgent, and it looks
like a permission issue.
Which permissions should be set on which directory? I am pasting the log below.
The service starts and then stops.
2013-04-19 02:21:06,388 FATAL org.apache.hadoop.mapred.JobTracker: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/mnt/san1":aye:hadmin:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4547)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4518)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2880)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:2844)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2823)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:639)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:417)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44096)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
Thanks,
Re: jobtracker is stopping because of permissions
Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
/mnt/san1 is owned by user "aye" and group "hadmin", and user "mapred" is
trying to write to this directory. Can you look at your core-, hdfs- and
mapred-site.xml to see where /mnt/san1 is configured as a value? That might
make it clearer what needs to be changed.
I suspect this is one of the system directories that the JobTracker has to
manage on HDFS in order to run jobs.
Thanks
Hemanth
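[Editor's note: once the offending property is identified (for example mapred.system.dir, or a staging directory rooted under /mnt/san1), the usual remedy on this era of Hadoop is to give the mapred user ownership of that HDFS path. The commands below are a sketch only; "/mnt/san1/mapred" and the "hadoop" group are assumed names, so substitute whatever your *-site.xml files actually point at.]

```shell
# Run as the HDFS superuser (often "hdfs"). The subdirectory name is an
# assumption; use the exact path configured in your *-site.xml files.
sudo -u hdfs hadoop fs -mkdir /mnt/san1/mapred
sudo -u hdfs hadoop fs -chown mapred:hadoop /mnt/san1/mapred

# Less safe alternative: make the parent group-writable instead.
# sudo -u hdfs hadoop fs -chmod 775 /mnt/san1
```

These commands only take effect on HDFS metadata, so no restart of the NameNode is needed; restart the JobTracker afterwards and re-check its log.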
Re: Mapreduce
Posted by 정종운 <sh...@gmail.com>.
http://hbase.apache.org/book/mapreduce.example.html
2013/4/20 Hemanth Yamijala <yh...@thoughtworks.com>
> As this is a HBase specific question, it will be better to ask this
> question on the HBase user mailing list.
>
> Thanks
> Hemanth
Re: Mapreduce
Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
As this is an HBase-specific question, it would be better to ask it on the
HBase user mailing list.
Thanks
Hemanth
Mapreduce
Posted by Adrian Acosta Mitjans <am...@estudiantes.uci.cu>.
Hello:
I'm working on a project and using HBase to store the data. I have this method that works correctly, but not with the performance I'm looking for, so I want to do the same thing using MapReduce.
public ArrayList<MyObject> findZ(String z) throws IOException {
    ArrayList<MyObject> rows = new ArrayList<MyObject>();
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "test");
    Scan s = new Scan();
    s.addColumn(Bytes.toBytes("x"), Bytes.toBytes("y"));
    ResultScanner scanner = table.getScanner(s);
    try {
        for (Result rr : scanner) {
            if (Bytes.toString(rr.getValue(Bytes.toBytes("x"), Bytes.toBytes("y"))).equals(z)) {
                rows.add(getInformation(Bytes.toString(rr.getRow())));
            }
        }
    } finally {
        scanner.close();
    }
    return rows;
}
The getInformation method reads all the columns and converts the row into a MyObject.
I would just like an example, or a link to a tutorial, that does something like this: I want result objects as the answer, not a word count like most of the examples I found.
My native language is Spanish, so sorry if something is not well written.
Thanks.
http://www.uci.cu
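[Editor's note: following the HBase book example linked in the replies, the scan-and-filter above can be turned into a map-only job with TableMapReduceUtil, which runs the same x:y == z check in parallel across regions. This is an untested sketch against the 0.94-era API used in the thread; FindZJob, findz.value, and the output path are made-up names, and since MyObject/getInformation are the poster's own code, the mapper only emits the matching row keys for getInformation to convert afterwards.]

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FindZJob {

    // Mapper: applies the same x:y == z test, but in parallel per region.
    static class MatchMapper extends TableMapper<Text, NullWritable> {
        private String z;

        @Override
        protected void setup(Context context) {
            z = context.getConfiguration().get("findz.value");
        }

        @Override
        protected void map(ImmutableBytesWritable row, Result rr, Context context)
                throws IOException, InterruptedException {
            byte[] v = rr.getValue(Bytes.toBytes("x"), Bytes.toBytes("y"));
            if (v != null && Bytes.toString(v).equals(z)) {
                // Emit only the matching row key; the caller can later build
                // MyObject instances from these keys via getInformation(...).
                context.write(new Text(Bytes.toString(row.get())), NullWritable.get());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("findz.value", args[0]);   // the z value to match

        Job job = new Job(conf, "findZ");
        job.setJarByClass(FindZJob.class);

        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("x"), Bytes.toBytes("y"));
        scan.setCaching(500);               // batch rows per RPC for MR scans
        scan.setCacheBlocks(false);         // don't pollute the block cache

        TableMapReduceUtil.initTableMapperJob("test", scan,
                MatchMapper.class, Text.class, NullWritable.class, job);
        job.setNumReduceTasks(0);           // map-only: just collect matching keys
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A further refinement, if the filtering itself is the bottleneck, would be to attach a SingleColumnValueFilter to the Scan so non-matching rows are dropped server-side before they ever reach the mapper.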