[Image 1] -> https://i.sstatic.net/82VBi7RT.png
[Image 2] -> https://i.sstatic.net/zOjcpST5.png
[Image 3] -> https://i.sstatic.net/9xrrQTKN.png
[Image 4] -> https://i.sstatic.net/7aJbBKeK.png
[Image 5] -> https://i.sstatic.net/nf93fPN8.png
stderr: /var/lib/ambari-agent/data/errors-997.txt
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
raise ExecutionFailed(err_msg, code, out, err)
ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/hadoop/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/hadoop-mapreduce/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/tez/lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/phoenix/lib/phoenix-client-hbase-2.5-5.1.3.1.2.2.0-46.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/phoenix/lib/phoenix-client-embedded-hbase-2.5-5.1.3.1.2.2.0-46.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/phoenix/lib/phoenix-client-embedded-hbase-2.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/phoenix/lib/phoenix-client-hbase-2.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/odp/1.2.2.0-46/hbase/lib/client-facing-thirdparty/log4j-slf4j-impl-2.17.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
2024-10-18 20:10:17,895 INFO [main] Configuration.deprecation: hbase.client.pause.cqtbe is deprecated. Instead, use hbase.client.pause.server.overloaded
2024-10-18 20:10:18,139 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x1ed9d173] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.8.3-1.2.2.0-46-5734065866741497287e0f17679df3b790dd7acc-dirty, built on 2024-01-10 09:59 UTC
2024-10-18 20:10:18,174 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x1ed9d173] zookeeper.ZooKeeper: Client environment:host.name=master1.bank.net
2024-10-18 20:10:18,174 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x1ed9d173] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_391
2024-10-18 20:10:18,174 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x1ed9d173] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2024-10-18 20:10:18,174 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x1ed9d173] zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.8.0_391/jre
2024-10-18 20:10:18,174 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x1ed9d173] zookeeper.ZooKeeper: Client environment:java.class.path=/usr/odp/1.2.2.0-46/hbase/lib/ruby/jruby-complete-9.2.13.0.jar:/usr/odp/1.2.2.0-46/hbase/conf:/usr/jdk64/jdk1.8.0-...........................ETC......................................pherf-5.1.3.1.2.2.0-46.jar:/usr/odp/1.2.2.0-46/phoenix/lib/phoenix-hbase-compat-2.1.6-5.1.3.1.2.2.0-46.jar:/usr/odp/1.2.2.0-46/phoenix/lib/phoenix-hbase-compat-2.2.5-5.1.3.1.2.2.0-46.jar:/usr/odp/1.2.2.0-46/phoenix/lib/phoenix-tracing-webapp-5.1.3.1.2.2.0-46-sources.jar::/usr/odp/1.2.2.0-46/hbase/lib/client-facing-thirdparty/log4j-1.2-api-2.17.2.jar:/usr/odp/1.2.2.0-46/hbase/lib/client-facing-thirdparty/log4j-api-2.17.2.jar:/usr/odp/1.2.2.0-46/hbase/lib/client-facing-thirdparty/log4j-core-2.17.2.jar:/usr/odp/1.2.2.0-46/hbase/lib/client-facing-thirdparty/log4j-slf4j-impl-2.17.2.jar
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/odp/1.2.2.0-46/hadoop/lib/native/Linux-amd64-64:/usr/odp/1.2.2.0-46/hadoop/lib/native
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:java.compiler=
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:os.name=Linux
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:os.version=6.8.0-45-generic
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:user.name=hbase
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:os.memory.free=5348MB
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:os.memory.max=5978MB
2024-10-18 20:12:36,400 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Client environment:os.memory.total=5594MB
2024-10-18 20:12:36,412 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Initiating client connection, connectString=master1.bank.net:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$232/400681957@28818bd5
2024-10-18 20:12:36,462 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] common.X509Util: Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2024-10-18 20:12:36,486 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ClientCnxnSocket: jute.maxbuffer value is 1048575 Bytes
2024-10-18 20:12:36,492 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=false
2024-10-18 20:12:36,707 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: Opening socket connection to server master1.bank.net/10.0.2.15:2181.
2024-10-18 20:12:36,746 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2024-10-18 20:12:36,755 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.0.2.15:46398, server: master1.bank.net/10.0.2.15:2181
2024-10-18 20:12:36,769 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: Session establishment complete on server master1.bank.net/10.0.2.15:2181, session id = 0x10000b21e3d0068, negotiated timeout = 60000
2024-10-18 20:13:37,421 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x10000b21e3d0068
2024-10-18 20:13:37,421 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Session: 0x10000b21e3d0068 closed
hbase:001:0>
hbase:002:0> _tbl_titan = 'atlas_janus'
=> "atlas_janus"
hbase:003:0> _tbl_audit = 'ATLAS_ENTITY_AUDIT_EVENTS'
=> "ATLAS_ENTITY_AUDIT_EVENTS"
hbase:004:0> _usr_atlas = 'atlas'
=> "atlas"
hbase:005:0>
hbase:006:0>
hbase:007:0> if not list.include? _tbl_titan
begin
create _tbl_titan,{NAME => 'e',DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION =>'GZ', BLOOMFILTER =>'ROW'},{NAME => 'g',DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION =>'GZ', BLOOMFILTER =>'ROW'},{NAME => 'i',DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION =>'GZ', BLOOMFILTER =>'ROW'},{NAME => 's',DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION =>'GZ', BLOOMFILTER =>'ROW'},{NAME => 'm',DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION =>'GZ', BLOOMFILTER =>'ROW'},{NAME => 'l',DATA_BLOCK_ENCODING => 'FAST_DIFF', COMPRESSION =>'GZ', BLOOMFILTER =>'ROW', TTL => 604800, KEEP_DELETED_CELLS =>false}
rescue RuntimeError => e
raise e if not e.message.include? "Table already exists"
end
end
TABLE
2024-10-18 20:13:46,956 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ZooKeeper: Initiating client connection, connectString=master1.bank.net:2181 sessionTimeout=90000 watcher=org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$232/400681957@28818bd5
2024-10-18 20:13:46,957 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ClientCnxnSocket: jute.maxbuffer value is 1048575 Bytes
2024-10-18 20:13:46,957 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd] zookeeper.ClientCnxn: zookeeper.request.timeout value is 0. feature enabled=false
2024-10-18 20:13:46,973 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: Opening socket connection to server master1.bank.net/10.0.2.15:2181.
2024-10-18 20:13:46,973 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: SASL config status: Will not attempt to authenticate using SASL (unknown error)
2024-10-18 20:13:46,994 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.0.2.15:37486, server: master1.bank.net/10.0.2.15:2181
2024-10-18 20:13:47,025 INFO [ReadOnlyZKClient-master1.bank.net:2181@0x05a466dd-SendThread(master1.bank.net:2181)] zookeeper.ClientCnxn: Session establishment complete on server master1.bank.net/10.0.2.15:2181, session id = 0x10000b21e3d0069, negotiated timeout = 60000
Took 0.3817 seconds
Unhandled Java exception: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase-unsecure/master
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase-unsecure/master
create at org/apache/zookeeper/KeeperException.java:118
create at org/apache/zookeeper/KeeperException.java:54
exec at org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java:174
run at org/apache/hadoop/hbase/zookeeper/ReadOnlyZKClient.java:344
run at java/lang/Thread.java:750
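For context, the setup script quoted above follows a simple guard pattern: create the table only if `list` does not already include it, and swallow only the "Table already exists" error so a racing re-run is harmless. A minimal plain-Ruby sketch of that pattern (outside the HBase shell, with a hypothetical `create_table` standing in for the shell's `create` command):

```ruby
# Plain-Ruby sketch of the guard used by atlas_hbase_setup.rb.
# `create_table` is a hypothetical stand-in for the HBase shell's
# `create` command; a second creation attempt raises the same
# "Table already exists" error the real shell reports.
def create_table(existing, name)
  raise RuntimeError, "Table already exists" if existing.include?(name)
  existing << name
end

tables = []
create_table(tables, "atlas_janus")   # first creation succeeds

begin
  create_table(tables, "atlas_janus") # second attempt hits the race
rescue RuntimeError => e
  # Re-raise anything except the benign "already exists" case,
  # mirroring the rescue clause in the Ambari script.
  raise e unless e.message.include?("Table already exists")
end
```

In the failing run above, however, the script never reaches this guard logic: the shell dies earlier with `NoNode for /hbase-unsecure/master`, i.e. no active HMaster was registered in ZooKeeper when the script ran.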
root@master1:~# jps
297349 Jps
141539 JobHistoryServer
110944 jar
223650 ResourceManager
142469 SecondaryNameNode
139562 HistoryServer
137195 NameNode
117832 AmbariServer
114088 AMSApplicationServer
110347 TagSynchronizer
113994 HMaster
139177 ApplicationHistoryServer
224399 HRegionServer
242636 -- process information unavailable
226128 NodeManager
110644 QuorumPeerMain
154775 TimelineReaderServer
133397 EmbeddedServer
141658 Kafka
149560 LivyServer
136280 DataNode
134079 UnixAuthenticationService
2024-10-16 20:42:02,088 INFO [master/master1:16000:becomeActiveMaster] region.RegionProcedureStore: Starting Region Procedure Store lease recovery...
2024-10-16 20:42:02,110 INFO [master/master1:16000:becomeActiveMaster] procedure2.ProcedureExecutor: Recovered RegionProcedureStore lease in 21 msec
2024-10-16 20:42:02,129 INFO [master/master1:16000:becomeActiveMaster] procedure2.ProcedureExecutor: Loaded RegionProcedureStore in 19 msec
2024-10-16 20:42:02,130 INFO [master/master1:16000:becomeActiveMaster] procedure2.RemoteProcedureDispatcher: Instantiated, coreThreads=128 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=>
2024-10-16 20:42:02,580 INFO [master/master1:16000:becomeActiveMaster] master.RegionServerTracker: Upgrading RegionServerTracker to active master mode; 0 have existingServerCrashProcedures, 0 possibly '>
2024-10-16 20:42:02,604 INFO [master/master1:16000:becomeActiveMaster] normalizer.SimpleRegionNormalizer: Updated configuration for key 'hbase.normalizer.merge.min_region_size.mb' from 0 to 1
2024-10-16 20:42:02,606 INFO [master/master1:16000:becomeActiveMaster] normalizer.RegionNormalizerWorker: Normalizer rate limit set to unlimited
2024-10-16 20:42:02,915 INFO [master/master1:16000:becomeActiveMaster] master.HMaster: Active/primary master=master1.bank.net,16000,1729107626852, sessionid=0x10000b21e3d0018, setting cluster-up flag (W>
2024-10-16 20:42:03,717 WARN [master/master1:16000:becomeActiveMaster] snapshot.SnapshotManager: Couldn't delete working snapshot directory: hdfs://master1.bank.net:8020/apps/hbase/data/.hbase-snapshot/>
2024-10-16 20:42:04,024 ERROR [master/master1:16000:becomeActiveMaster] coprocessor.CoprocessorHost: The coprocessor org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor threw java.lang.ClassNotFoundExcepti>
java.lang.ClassNotFoundException: org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:359)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:167)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost.(MasterCoprocessorHost.java:157)
at org.apache.hadoop.hbase.master.HMaster.initializeCoprocessorHost(HMaster.java:4240)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1010)
at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2405)
at org.apache.hadoop.hbase.master.HMaster.lambda$null$0(HMaster.java:565)
at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:187)
at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:177)
at org.apache.hadoop.hbase.master.HMaster.lambda$run$1(HMaster.java:562)
at java.lang.Thread.run(Thread.java:750)
2024-10-16 20:42:04,055 ERROR [master/master1:16000:becomeActiveMaster] master.HMaster: ***** ABORTING master master1.bank.net,16000,1729107626852: The coprocessor org.apache.atlas.hbase.hook.HBaseAtlasC>
java.lang.ClassNotFoundException: org.apache.atlas.hbase.hook.HBaseAtlasCoprocessor
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:359)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadSystemCoprocessors(CoprocessorHost.java:167)
at org.apache.hadoop.hbase.master.MasterCoprocessorHost.(MasterCoprocessorHost.java:157)
at org.apache.hadoop.hbase.master.HMaster.initializeCoprocessorHost(HMaster.java:4240)
at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1010)
at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2405)
at org.apache.hadoop.hbase.master.HMaster.lambda$null$0(HMaster.java:565)
at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:187)
at org.apache.hadoop.hbase.trace.TraceUtil.trace(TraceUtil.java:177)
at org.apache.hadoop.hbase.master.HMaster.lambda$run$1(HMaster.java:562)
at java.lang.Thread.run(Thread.java:750)
I have a problem with my Apache Ambari installation; I am using the ODP distribution from https://clemlab.com/. The failure occurs when starting the Atlas Metadata Server, and the screenshots and logs above show the full output.

I have tried most of the solutions available online, but none of them worked and the problem is still not solved. Please help me; this is very important to me. Thank you very much. I want the data warehouse to run smoothly and without errors, and especially for HBase to work well and correctly.