
5. Building a Hyperledger Fabric Network (2)

Continuing from part (1), this post brings up the Kafka/ZooKeeper ordering back end on the kafka-zookeeper host and then starts orderer0 against it.

(Screenshots 32.PNG – 36.PNG)

root@kafka-zookeeper:~# mv /home/leejinkwan/testnet.tar.gz ./
root@kafka-zookeeper:~# tar xvfz testnet.tar.gz
root@kafka-zookeeper:~# cd testnet/ 
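
The testnet.tar.gz archive presumably carries the artifacts prepared in part (1): on this host the relevant piece is the docker-compose.yaml edited next, while the crypto-config tree inside it is what the orderer host will use later. To peek at what was unpacked before editing anything (a quick sketch, not part of the original session):

tar tzf testnet.tar.gz | head    # list the first entries in the archive
ls testnet/                      # contents of the extracted directory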

root@kafka-zookeeper:~/testnet# vi docker-compose.yaml
version: '2'
services:
    zookeeper:
        image: hyperledger/fabric-zookeeper
#        restart: always
        ports:
            - "2181:2181"
    kafka0:
        image: hyperledger/fabric-kafka
#        restart: always
        environment:
            - KAFKA_ADVERTISED_HOST_NAME=kafka-zookeeper
            - KAFKA_ADVERTISED_PORT=9092
            - KAFKA_BROKER_ID=0
            - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
            - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
            - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
            - KAFKA_NUM_REPLICA_FETCHERS=1
            - KAFKA_DEFAULT_REPLICATION_FACTOR=1
            - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
        ports:
            - "9092:9092"
        depends_on:
            - zookeeper 
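
Two details worth noting in this file (my reading, not stated in the original post): KAFKA_MESSAGE_MAX_BYTES and KAFKA_REPLICA_FETCH_MAX_BYTES are raised to 99 MiB so the broker can carry blocks larger than Kafka's roughly 1 MB default, and KAFKA_ZOOKEEPER_CONNECT refers to the zookeeper service by its Compose service name, which Docker's embedded DNS resolves on the testnet_default network. Before the foreground run shown below, the file can be validated, or the stack can be started detached instead (standard docker-compose subcommands, not part of the original session):

docker-compose config      # parse and print the resolved configuration, catching YAML mistakes early
docker-compose up -d       # start both services in the background
docker-compose logs -f     # follow their logs, equivalent to the foreground output shown below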

root@kafka-zookeeper:~/testnet# docker-compose up
Creating network "testnet_default" with the default driver
Creating testnet_zookeeper_1 ... done
Creating testnet_kafka0_1    ... done
Attaching to testnet_zookeeper_1, testnet_kafka0_1
zookeeper_1  | ZooKeeper JMX enabled by default
zookeeper_1  | Using config: /conf/zoo.cfg
zookeeper_1  | 2019-08-27 13:47:41,955 [myid:] - INFO  [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
zookeeper_1  | 2019-08-27 13:47:41,971 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
zookeeper_1  | 2019-08-27 13:47:41,972 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
zookeeper_1  | 2019-08-27 13:47:41,976 [myid:] - WARN  [main:QuorumPeerMain@113] - Either no config or no quorum defined in config, running  in standalone mode
zookeeper_1  | 2019-08-27 13:47:41,978 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
zookeeper_1  | 2019-08-27 13:47:42,085 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
zookeeper_1  | 2019-08-27 13:47:42,103 [myid:] - INFO  [main:QuorumPeerConfig@124] - Reading configuration from: /conf/zoo.cfg
zookeeper_1  | 2019-08-27 13:47:42,104 [myid:] - INFO  [main:ZooKeeperServerMain@96] - Starting server
zookeeper_1  | 2019-08-27 13:47:42,141 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT
zookeeper_1  | 2019-08-27 13:47:42,141 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=bffcb568c9a7
zookeeper_1  | 2019-08-27 13:47:42,141 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_191
zookeeper_1  | 2019-08-27 13:47:42,142 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
zookeeper_1  | 2019-08-27 13:47:42,142 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
zookeeper_1  | 2019-08-27 13:47:42,142 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper-3.4.9/bin/../build/classes:/zookeeper-3.4.9/bin/../build/lib/*.jar:/zookeeper-3.4.9/bin/../lib/slf4j-log4j12-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/slf4j-api-1.6.1.jar:/zookeeper-3.4.9/bin/../lib/netty-3.10.5.Final.jar:/zookeeper-3.4.9/bin/../lib/log4j-1.2.16.jar:/zookeeper-3.4.9/bin/../lib/jline-0.9.94.jar:/zookeeper-3.4.9/bin/../zookeeper-3.4.9.jar:/zookeeper-3.4.9/bin/../src/java/lib/*.jar:/conf:
zookeeper_1  | 2019-08-27 13:47:42,142 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
zookeeper_1  | 2019-08-27 13:47:42,142 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
zookeeper_1  | 2019-08-27 13:47:42,156 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
zookeeper_1  | 2019-08-27 13:47:42,157 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
zookeeper_1  | 2019-08-27 13:47:42,157 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
zookeeper_1  | 2019-08-27 13:47:42,157 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=4.15.0-58-generic
zookeeper_1  | 2019-08-27 13:47:42,160 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=zookeeper
zookeeper_1  | 2019-08-27 13:47:42,160 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/home/zookeeper
zookeeper_1  | 2019-08-27 13:47:42,160 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper-3.4.9
zookeeper_1  | 2019-08-27 13:47:42,179 [myid:] - INFO  [main:ZooKeeperServer@815] - tickTime set to 2000
zookeeper_1  | 2019-08-27 13:47:42,179 [myid:] - INFO  [main:ZooKeeperServer@824] - minSessionTimeout set to -1
zookeeper_1  | 2019-08-27 13:47:42,179 [myid:] - INFO  [main:ZooKeeperServer@833] - maxSessionTimeout set to -1
zookeeper_1  | 2019-08-27 13:47:42,223 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
kafka0_1     | [2019-08-27 13:47:46,287] INFO KafkaConfig values:
kafka0_1     |  advertised.host.name = kafka-zookeeper
kafka0_1     |  advertised.listeners = null
kafka0_1     |  advertised.port = 9092
kafka0_1     |  alter.config.policy.class.name = null
kafka0_1     |  authorizer.class.name =
kafka0_1     |  auto.create.topics.enable = true
kafka0_1     |  auto.leader.rebalance.enable = true
kafka0_1     |  background.threads = 10
kafka0_1     |  broker.id = 0
kafka0_1     |  broker.id.generation.enable = true
kafka0_1     |  broker.rack = null
kafka0_1     |  compression.type = producer
kafka0_1     |  connections.max.idle.ms = 600000
kafka0_1     |  controlled.shutdown.enable = true
kafka0_1     |  controlled.shutdown.max.retries = 3
kafka0_1     |  controlled.shutdown.retry.backoff.ms = 5000
kafka0_1     |  controller.socket.timeout.ms = 30000
kafka0_1     |  create.topic.policy.class.name = null
kafka0_1     |  default.replication.factor = 1
kafka0_1     |  delete.records.purgatory.purge.interval.requests = 1
kafka0_1     |  delete.topic.enable = true
kafka0_1     |  fetch.purgatory.purge.interval.requests = 1000
kafka0_1     |  group.initial.rebalance.delay.ms = 0
kafka0_1     |  group.max.session.timeout.ms = 300000
kafka0_1     |  group.min.session.timeout.ms = 6000
kafka0_1     |  host.name =
kafka0_1     |  inter.broker.listener.name = null
kafka0_1     |  inter.broker.protocol.version = 1.0-IV0
kafka0_1     |  leader.imbalance.check.interval.seconds = 300
kafka0_1     |  leader.imbalance.per.broker.percentage = 10
kafka0_1     |  listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
kafka0_1     |  listeners = null
kafka0_1     |  log.cleaner.backoff.ms = 15000
kafka0_1     |  log.cleaner.dedupe.buffer.size = 134217728
kafka0_1     |  log.cleaner.delete.retention.ms = 86400000
kafka0_1     |  log.cleaner.enable = true
kafka0_1     |  log.cleaner.io.buffer.load.factor = 0.9
kafka0_1     |  log.cleaner.io.buffer.size = 524288
kafka0_1     |  log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
kafka0_1     |  log.cleaner.min.cleanable.ratio = 0.5
kafka0_1     |  log.cleaner.min.compaction.lag.ms = 0
kafka0_1     |  log.cleaner.threads = 1
kafka0_1     |  log.cleanup.policy = [delete]
kafka0_1     |  log.dir = /tmp/kafka-logs
kafka0_1     |  log.dirs = /tmp/kafka-logs
kafka0_1     |  log.flush.interval.messages = 9223372036854775807
kafka0_1     |  log.flush.interval.ms = null
kafka0_1     |  log.flush.offset.checkpoint.interval.ms = 60000
kafka0_1     |  log.flush.scheduler.interval.ms = 9223372036854775807
kafka0_1     |  log.flush.start.offset.checkpoint.interval.ms = 60000
kafka0_1     |  log.index.interval.bytes = 4096
kafka0_1     |  log.index.size.max.bytes = 10485760
kafka0_1     |  log.message.format.version = 1.0-IV0
kafka0_1     |  log.message.timestamp.difference.max.ms = 9223372036854775807
kafka0_1     |  log.message.timestamp.type = CreateTime
kafka0_1     |  log.preallocate = false
kafka0_1     |  log.retention.bytes = -1
kafka0_1     |  log.retention.check.interval.ms = 300000
kafka0_1     |  log.retention.hours = 168
kafka0_1     |  log.retention.minutes = null
kafka0_1     |  log.retention.ms = -1
kafka0_1     |  log.roll.hours = 168
kafka0_1     |  log.roll.jitter.hours = 0
kafka0_1     |  log.roll.jitter.ms = null
kafka0_1     |  log.roll.ms = null
kafka0_1     |  log.segment.bytes = 1073741824
kafka0_1     |  log.segment.delete.delay.ms = 60000
kafka0_1     |  max.connections.per.ip = 2147483647
kafka0_1     |  max.connections.per.ip.overrides =
kafka0_1     |  message.max.bytes = 103809024
kafka0_1     |  metric.reporters = []
kafka0_1     |  metrics.num.samples = 2
kafka0_1     |  metrics.recording.level = INFO
kafka0_1     |  metrics.sample.window.ms = 30000
kafka0_1     |  min.insync.replicas = 1
kafka0_1     |  num.io.threads = 8
kafka0_1     |  num.network.threads = 3
kafka0_1     |  num.partitions = 1
kafka0_1     |  num.recovery.threads.per.data.dir = 1
kafka0_1     |  num.replica.fetchers = 1
kafka0_1     |  offset.metadata.max.bytes = 4096
kafka0_1     |  offsets.commit.required.acks = -1
kafka0_1     |  offsets.commit.timeout.ms = 5000
kafka0_1     |  offsets.load.buffer.size = 5242880
kafka0_1     |  offsets.retention.check.interval.ms = 600000
kafka0_1     |  offsets.retention.minutes = 1440
kafka0_1     |  offsets.topic.compression.codec = 0
kafka0_1     |  offsets.topic.num.partitions = 50
kafka0_1     |  offsets.topic.replication.factor = 1
kafka0_1     |  offsets.topic.segment.bytes = 104857600
kafka0_1     |  port = 9092
kafka0_1     |  principal.builder.class = null
kafka0_1     |  producer.purgatory.purge.interval.requests = 1000
kafka0_1     |  queued.max.request.bytes = -1
kafka0_1     |  queued.max.requests = 500
kafka0_1     |  quota.consumer.default = 9223372036854775807
kafka0_1     |  quota.producer.default = 9223372036854775807
kafka0_1     |  quota.window.num = 11
kafka0_1     |  quota.window.size.seconds = 1
kafka0_1     |  replica.fetch.backoff.ms = 1000
kafka0_1     |  replica.fetch.max.bytes = 103809024
kafka0_1     |  replica.fetch.min.bytes = 1
kafka0_1     |  replica.fetch.response.max.bytes = 10485760
kafka0_1     |  replica.fetch.wait.max.ms = 500
kafka0_1     |  replica.high.watermark.checkpoint.interval.ms = 5000
kafka0_1     |  replica.lag.time.max.ms = 10000
kafka0_1     |  replica.socket.receive.buffer.bytes = 65536
kafka0_1     |  replica.socket.timeout.ms = 30000
kafka0_1     |  replication.quota.window.num = 11
kafka0_1     |  replication.quota.window.size.seconds = 1
kafka0_1     |  request.timeout.ms = 30000
kafka0_1     |  reserved.broker.max.id = 1000
kafka0_1     |  sasl.enabled.mechanisms = [GSSAPI]
kafka0_1     |  sasl.kerberos.kinit.cmd = /usr/bin/kinit
kafka0_1     |  sasl.kerberos.min.time.before.relogin = 60000
kafka0_1     |  sasl.kerberos.principal.to.local.rules = [DEFAULT]
kafka0_1     |  sasl.kerberos.service.name = null
kafka0_1     |  sasl.kerberos.ticket.renew.jitter = 0.05
kafka0_1     |  sasl.kerberos.ticket.renew.window.factor = 0.8
kafka0_1     |  sasl.mechanism.inter.broker.protocol = GSSAPI
kafka0_1     |  security.inter.broker.protocol = PLAINTEXT
kafka0_1     |  socket.receive.buffer.bytes = 102400
kafka0_1     |  socket.request.max.bytes = 104857600
kafka0_1     |  socket.send.buffer.bytes = 102400
kafka0_1     |  ssl.cipher.suites = null
kafka0_1     |  ssl.client.auth = none
kafka0_1     |  ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
kafka0_1     |  ssl.endpoint.identification.algorithm = null
kafka0_1     |  ssl.key.password = null
kafka0_1     |  ssl.keymanager.algorithm = SunX509
kafka0_1     |  ssl.keystore.location = null
kafka0_1     |  ssl.keystore.password = null
kafka0_1     |  ssl.keystore.type = JKS
kafka0_1     |  ssl.protocol = TLS
kafka0_1     |  ssl.provider = null
kafka0_1     |  ssl.secure.random.implementation = null
kafka0_1     |  ssl.trustmanager.algorithm = PKIX
kafka0_1     |  ssl.truststore.location = null
kafka0_1     |  ssl.truststore.password = null
kafka0_1     |  ssl.truststore.type = JKS
kafka0_1     |  transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
kafka0_1     |  transaction.max.timeout.ms = 900000
kafka0_1     |  transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka0_1     |  transaction.state.log.load.buffer.size = 5242880
kafka0_1     |  transaction.state.log.min.isr = 1
kafka0_1     |  transaction.state.log.num.partitions = 50
kafka0_1     |  transaction.state.log.replication.factor = 1
kafka0_1     |  transaction.state.log.segment.bytes = 104857600
kafka0_1     |  transactional.id.expiration.ms = 604800000
kafka0_1     |  unclean.leader.election.enable = false
kafka0_1     |  zookeeper.connect = zookeeper:2181
kafka0_1     |  zookeeper.connection.timeout.ms = 6000
kafka0_1     |  zookeeper.session.timeout.ms = 6000
kafka0_1     |  zookeeper.set.acl = false
kafka0_1     |  zookeeper.sync.time.ms = 2000
kafka0_1     |  (kafka.server.KafkaConfig)
kafka0_1     | [2019-08-27 13:47:46,597] INFO starting (kafka.server.KafkaServer)
kafka0_1     | [2019-08-27 13:47:46,607] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
kafka0_1     | [2019-08-27 13:47:46,660] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
kafka0_1     | [2019-08-27 13:47:46,667] INFO Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,676] INFO Client environment:host.name=f506b18a190b (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,678] INFO Client environment:java.version=1.8.0_191 (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,678] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,678] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,678] INFO Client environment:java.class.path=:/opt/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b32.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/commons-lang3-3.5.jar:/opt/kafka/bin/../libs/connect-api-1.0.0.jar:/opt/kafka/bin/../libs/connect-file-1.0.0.jar:/opt/kafka/bin/../libs/connect-json-1.0.0.jar:/opt/kafka/bin/../libs/connect-runtime-1.0.0.jar:/opt/kafka/bin/../libs/connect-transforms-1.0.0.jar:/opt/kafka/bin/../libs/guava-20.0.jar:/opt/kafka/bin/../libs/hk2-api-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-locator-2.5.0-b32.jar:/opt/kafka/bin/../libs/hk2-utils-2.5.0-b32.jar:/opt/kafka/bin/../libs/jackson-annotations-2.9.1.jar:/opt/kafka/bin/../libs/jackson-core-2.9.1.jar:/opt/kafka/bin/../libs/jackson-databind-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.9.1.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.9.1.jar:/opt/kafka/bin/../libs/jackson-module-jaxb-annotations-2.9.1.jar:/opt/kafka/bin/../libs/javassist-3.20.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.21.0-GA.jar:/opt/kafka/bin/../libs/javax.annotation-api-1.2.jar:/opt/kafka/bin/../libs/javax.inject-1.jar:/opt/kafka/bin/../libs/javax.inject-2.5.0-b32.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/opt/kafka/bin/../libs/jersey-client-2.25.1.jar:/opt/kafka/bin/../libs/jersey-common-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.25.1.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.25.1.jar:/opt/kafka/bin/../libs/jersey-guava-2.25.1.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.25.1.jar:/opt/kafka/bin/../libs/jersey-server-2.25.1.jar:/opt/kafka/bin/../libs/jetty-continuation-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-http-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-io-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-security-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-server-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlet-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-servlets-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jetty-util-9.2.22.v20170606.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-1.0.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-1.0.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-1.0.0.jar:/opt/kafka/bin/../libs/kafka-tools-1.0.0.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0-test-sources.jar:/opt/kafka/bin/../libs/kafka_2.11-1.0.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.4.jar:/opt/kafka/bin/../libs/maven-artifact-3.5.0.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/kafka/bin/../libs/plexus-utils-3.0.24.jar:/opt/kafka/bin/../libs/reflections-0.9.11.jar:/opt/kafka/bin/../libs/rocksdbjni-5.7.3.jar:/opt/kafka/bin/../libs/scala-library-2.11.11.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.25.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.25.jar:/opt/kafka/bin/../libs/snappy-java-1.1.4.jar:/opt/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/opt/kafka/bin/../libs/zkclient-0.10.jar:/opt/kafka/bin/../libs/zookeeper-3.4.10.jar (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,682] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,683] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,683] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,684] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,684] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,684] INFO Client environment:os.version=4.15.0-58-generic (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,684] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,684] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,684] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,688] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@4466af20 (org.apache.zookeeper.ZooKeeper)
kafka0_1     | [2019-08-27 13:47:46,749] INFO Waiting for keeper state SyncConnected (org.I0Itec.zkclient.ZkClient)
kafka0_1     | [2019-08-27 13:47:46,751] INFO Opening socket connection to server testnet_zookeeper_1.testnet_default/172.18.0.2:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
zookeeper_1  | 2019-08-27 13:47:46,763 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.18.0.3:43786
kafka0_1     | [2019-08-27 13:47:46,767] INFO Socket connection established to testnet_zookeeper_1.testnet_default/172.18.0.2:2181, initiating session (org.apache.zookeeper.ClientCnxn)
zookeeper_1  | 2019-08-27 13:47:46,789 [myid:] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /172.18.0.3:43786
zookeeper_1  | 2019-08-27 13:47:46,797 [myid:] - INFO  [SyncThread:0:FileTxnLog@203] - Creating new log file: log.1
zookeeper_1  | 2019-08-27 13:47:46,836 [myid:] - INFO  [SyncThread:0:ZooKeeperServer@673] - Established session 0x16cd3550d4f0000 with negotiated timeout 6000 for client /172.18.0.3:43786
kafka0_1     | [2019-08-27 13:47:46,835] INFO Session establishment complete on server testnet_zookeeper_1.testnet_default/172.18.0.2:2181, sessionid = 0x16cd3550d4f0000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
kafka0_1     | [2019-08-27 13:47:46,841] INFO zookeeper state changed (SyncConnected) (org.I0Itec.zkclient.ZkClient)
zookeeper_1  | 2019-08-27 13:47:46,968 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:create cxid:0x5 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
zookeeper_1  | 2019-08-27 13:47:46,995 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:create cxid:0xb zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
zookeeper_1  | 2019-08-27 13:47:47,017 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:create cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
zookeeper_1  | 2019-08-27 13:47:47,376 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:create cxid:0x1f zxid:0x13 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster
kafka0_1     | [2019-08-27 13:47:47,386] INFO Cluster ID = pzTGK-iKQGC666VTWrxn4Q (kafka.server.KafkaServer)
kafka0_1     | [2019-08-27 13:47:47,399] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka0_1     | [2019-08-27 13:47:47,474] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka0_1     | [2019-08-27 13:47:47,482] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka0_1     | [2019-08-27 13:47:47,488] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
kafka0_1     | [2019-08-27 13:47:47,579] INFO Log directory '/tmp/kafka-logs' not found, creating it. (kafka.log.LogManager)
kafka0_1     | [2019-08-27 13:47:47,603] INFO Loading logs. (kafka.log.LogManager)
kafka0_1     | [2019-08-27 13:47:47,629] INFO Logs loading complete in 21 ms. (kafka.log.LogManager)
kafka0_1     | [2019-08-27 13:47:47,830] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka0_1     | [2019-08-27 13:47:47,836] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka0_1     | [2019-08-27 13:47:48,868] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
kafka0_1     | [2019-08-27 13:47:48,876] INFO [SocketServer brokerId=0] Started 1 acceptor threads (kafka.network.SocketServer)
kafka0_1     | [2019-08-27 13:47:48,909] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0_1     | [2019-08-27 13:47:48,913] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0_1     | [2019-08-27 13:47:48,920] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0_1     | [2019-08-27 13:47:49,019] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka0_1     | [2019-08-27 13:47:49,148] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0_1     | [2019-08-27 13:47:49,159] INFO Creating /controller (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
kafka0_1     | [2019-08-27 13:47:49,180] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0_1     | [2019-08-27 13:47:49,186] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka0_1     | [2019-08-27 13:47:49,204] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
zookeeper_1  | 2019-08-27 13:47:49,228 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:setData cxid:0x29 zxid:0x17 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
kafka0_1     | [2019-08-27 13:47:49,298] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka0_1     | [2019-08-27 13:47:49,317] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka0_1     | [2019-08-27 13:47:49,329] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 10 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
kafka0_1     | [2019-08-27 13:47:49,385] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
kafka0_1     | [2019-08-27 13:47:49,488] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka0_1     | [2019-08-27 13:47:49,497] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka0_1     | [2019-08-27 13:47:49,510] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
zookeeper_1  | 2019-08-27 13:47:49,534 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:delete cxid:0x41 zxid:0x1a txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka0_1     | [2019-08-27 13:47:49,617] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.utils.ZKCheckedEphemeral)
zookeeper_1  | 2019-08-27 13:47:49,619 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:create cxid:0x4b zxid:0x1b txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
zookeeper_1  | 2019-08-27 13:47:49,629 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16cd3550d4f0000 type:create cxid:0x4c zxid:0x1c txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
kafka0_1     | [2019-08-27 13:47:49,634] INFO Result of znode creation is: OK (kafka.utils.ZKCheckedEphemeral)
kafka0_1     | [2019-08-27 13:47:49,638] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka-zookeeper,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
kafka0_1     | [2019-08-27 13:47:49,653] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
kafka0_1     | [2019-08-27 13:47:49,691] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka0_1     | [2019-08-27 13:47:49,704] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
kafka0_1     | [2019-08-27 13:47:49,707] INFO [KafkaServer id=0] started (kafka.server.KafkaServer) 
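
The startup ends with "[KafkaServer id=0] started", so ZooKeeper should now be listening on 2181 and the Kafka broker on 9092, both published to the host. From a second terminal on the kafka-zookeeper host a quick sanity check could look like this (a sketch, not part of the original session; the last line assumes netcat is installed):

docker-compose ps                    # run from ~/testnet; testnet_zookeeper_1 and testnet_kafka0_1 should both be "Up"
ss -lnt | grep -E ':2181|:9092'      # the two published ports
echo ruok | nc localhost 2181        # ZooKeeper four-letter health check; a healthy server answers "imok"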

(Screenshots 37.PNG – 41.PNG)

root@orderer0:~# mv /home/leejinkwan/testnet.tar.gz ./
root@orderer0:~# tar xvfz testnet.tar.gz
root@orderer0:~# cd testnet/
root@orderer0:~/testnet# vi runOrderer0.sh
ORDERER_GENERAL_LOGLEVEL=info \
ORDERER_GENERAL_LISTENADDRESS=orderer0 \
ORDERER_GENERAL_GENESISMETHOD=file \
ORDERER_GENERAL_GENESISFILE=/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/genesis.block \
ORDERER_GENERAL_LOCALMSPID=OrdererOrg0MSP \
ORDERER_GENERAL_LOCALMSPDIR=/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/msp \
ORDERER_GENERAL_TLS_ENABLED=false \
ORDERER_GENERAL_TLS_PRIVATEKEY=/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.key \
ORDERER_GENERAL_TLS_CERTIFICATE=/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.crt \
ORDERER_GENERAL_TLS_ROOTCAS=[/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/ca.crt,/root/testnet/crypto-config/peerOrganizations/org0/peers/peer0.org0/tls/ca.crt] \
CONFIGTX_ORDERER_BATCHTIMEOUT=1s \
CONFIGTX_ORDERER_ORDERERTYPE=kafka \
CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka-zookeeper:9092] \
orderer
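
The script launches the orderer binary with its settings supplied as ORDERER_GENERAL_* environment variables: the genesis block and local MSP come from the crypto material extracted above, TLS stays disabled, and the log level is info. (The CONFIGTX_ORDERER_* variables use the prefix read by configtxgen rather than by the orderer itself, so in practice the Kafka broker address comes from the genesis block; the lines are harmless either way.) Before running the script it may be worth confirming that the genesis block generated in part (1) is where the script expects it, e.g. (a sketch, assuming the Fabric configtxgen binary is on the PATH):

ls -l /root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/genesis.block
configtxgen -inspectBlock /root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/genesis.block
# the second command dumps the block as JSON, so the kafka consensus type and broker list baked into it can be checked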
root@orderer0:~/testnet# chmod 777 runOrderer0.sh
root@orderer0:~/testnet# ./runOrderer0.sh
2019-08-27 22:52:28.847 KST [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
2019-08-27 22:52:28.870 KST [orderer.common.server] prettyPrintStruct -> INFO 002 Orderer config values:
        General.LedgerType = "file"
        General.ListenAddress = "orderer0"
        General.ListenPort = 7050
        General.TLS.Enabled = false
        General.TLS.PrivateKey = "/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.key"
        General.TLS.Certificate = "/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/server.crt"
        General.TLS.RootCAs = [/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/tls/ca.crt /root/testnet/crypto-config/peerOrganizations/org0/peers/peer0.org0/tls/ca.crt]
        General.TLS.ClientAuthRequired = false
        General.TLS.ClientRootCAs = []
        General.Cluster.ListenAddress = ""
        General.Cluster.ListenPort = 0
        General.Cluster.ServerCertificate = ""
        General.Cluster.ServerPrivateKey = ""
        General.Cluster.ClientCertificate = ""
        General.Cluster.ClientPrivateKey = ""
        General.Cluster.RootCAs = []
        General.Cluster.DialTimeout = 5s
        General.Cluster.RPCTimeout = 7s
        General.Cluster.ReplicationBufferSize = 20971520
        General.Cluster.ReplicationPullTimeout = 5s
        General.Cluster.ReplicationRetryTimeout = 5s
        General.Cluster.ReplicationBackgroundRefreshInterval = 5m0s
        General.Cluster.ReplicationMaxRetries = 12
        General.Cluster.SendBufferSize = 10
        General.Cluster.CertExpirationWarningThreshold = 168h0m0s
        General.Cluster.TLSHandshakeTimeShift = 0s
        General.Keepalive.ServerMinInterval = 1m0s
        General.Keepalive.ServerInterval = 2h0m0s
        General.Keepalive.ServerTimeout = 20s
        General.ConnectionTimeout = 0s
        General.GenesisMethod = "file"
        General.GenesisProfile = "SampleInsecureSolo"
        General.SystemChannel = "test-system-channel-name"
        General.GenesisFile = "/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/genesis.block"
        General.Profile.Enabled = false
        General.Profile.Address = "0.0.0.0:6060"
        General.LocalMSPDir = "/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/msp"
        General.LocalMSPID = "OrdererOrg0MSP"
        General.BCCSP.ProviderName = "SW"
        General.BCCSP.SwOpts.SecLevel = 256
        General.BCCSP.SwOpts.HashFamily = "SHA2"
        General.BCCSP.SwOpts.Ephemeral = false
        General.BCCSP.SwOpts.FileKeystore.KeyStorePath = "/root/testnet/crypto-config/ordererOrganizations/ordererorg0/orderers/orderer0.ordererorg0/msp/keystore"
        General.BCCSP.SwOpts.DummyKeystore =
        General.BCCSP.SwOpts.InmemKeystore =
        General.BCCSP.PluginOpts =
        General.Authentication.TimeWindow = 15m0s
        General.Authentication.NoExpirationChecks = false
        FileLedger.Location = "/var/hyperledger/production/orderer"
        FileLedger.Prefix = "hyperledger-fabric-ordererledger"
        RAMLedger.HistorySize = 1000
        Kafka.Retry.ShortInterval = 5s
        Kafka.Retry.ShortTotal = 10m0s
        Kafka.Retry.LongInterval = 5m0s
        Kafka.Retry.LongTotal = 12h0m0s
        Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
        Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
        Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
        Kafka.Retry.Metadata.RetryMax = 3
        Kafka.Retry.Metadata.RetryBackoff = 250ms
        Kafka.Retry.Producer.RetryMax = 3
        Kafka.Retry.Producer.RetryBackoff = 100ms
        Kafka.Retry.Consumer.RetryBackoff = 2s
        Kafka.Verbose = false
        Kafka.Version = 0.10.2.0
        Kafka.TLS.Enabled = false
        Kafka.TLS.PrivateKey = ""
        Kafka.TLS.Certificate = ""
        Kafka.TLS.RootCAs = []
        Kafka.TLS.ClientAuthRequired = false
        Kafka.TLS.ClientRootCAs = []
        Kafka.SASLPlain.Enabled = false
        Kafka.SASLPlain.User = ""
        Kafka.SASLPlain.Password = ""
        Kafka.Topic.ReplicationFactor = 3
        Debug.BroadcastTraceDir = ""
        Debug.DeliverTraceDir = ""
        Consensus = map[WALDir:/var/hyperledger/production/orderer/etcdraft/wal SnapDir:/var/hyperledger/production/orderer/etcdraft/snapshot]
        Operations.ListenAddress = "127.0.0.1:8443"
        Operations.TLS.Enabled = false
        Operations.TLS.PrivateKey = ""
        Operations.TLS.Certificate = ""
        Operations.TLS.RootCAs = []
        Operations.TLS.ClientAuthRequired = false
        Operations.TLS.ClientRootCAs = []
        Metrics.Provider = "disabled"
        Metrics.Statsd.Network = "udp"
        Metrics.Statsd.Address = "127.0.0.1:8125"
        Metrics.Statsd.WriteInterval = 30s
        Metrics.Statsd.Prefix = ""
2019-08-27 22:52:28.890 KST [orderer.common.server] extractSysChanLastConfig -> INFO 003 Bootstrapping because no existing channels
2019-08-27 22:52:28.899 KST [fsblkstorage] newBlockfileMgr -> INFO 004 Getting block information from block storage
2019-08-27 22:52:28.916 KST [orderer.consensus.kafka] newChain -> INFO 005 [channel: testchainid] Starting chain with last persisted offset -3 and last recorded block [0]
2019-08-27 22:52:28.917 KST [orderer.commmon.multichannel] Initialize -> INFO 006 Starting system channel 'testchainid' with genesis block hash 90d293de53905727877600892b5a71433cad315c4d575231948ddbb8bb49d6dd and orderer type kafka
2019-08-27 22:52:28.918 KST [orderer.consensus.kafka] setupTopicForChannel -> INFO 007 [channel: testchainid] Setting up the topic for this channel...
2019-08-27 22:52:28.920 KST [orderer.common.server] Start -> INFO 008 Starting orderer:
 Version: 1.4.3
 Commit SHA: 1e54df2ea
 Go version: go1.10.4
 OS/Arch: linux/amd64
2019-08-27 22:52:28.921 KST [orderer.common.server] Start -> INFO 009 Beginning to serve requests
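
The last lines confirm that the system channel testchainid was bootstrapped from the genesis block and that orderer v1.4.3 is serving requests on its default port 7050, while setupTopicForChannel should have created a Kafka topic for that channel. A rough cross-check, with the container name and Kafka install path taken from the session above (a sketch, not part of the original post):

ss -lnt | grep ':7050'    # on orderer0: the orderer's listen port should be open

# on kafka-zookeeper: the channel topic should be listed
docker exec testnet_kafka0_1 /opt/kafka/bin/kafka-topics.sh --zookeeper zookeeper:2181 --list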