NAME
Kafka::Cluster - object interface to manage a test kafka cluster.
VERSION
This documentation refers to Kafka::Cluster version 1.08.
SYNOPSIS
# For examples see:
# t/*_cluster.t, t/*_cluster_start.t, t/*_connection.t, t/*_cluster_stop.t
DESCRIPTION
This module is not intended to be used by the end user.
The main features of the Kafka::Cluster
module are:
Automatic start and stop of a local ZooKeeper server for tests.
Start-up, re-initialization, and shut-down of a cluster of Kafka servers.
Automatic selection of a free port for started servers.
Creation and deletion of the data structures used by the servers.
Retrieval of information about running servers.
Connection to a previously started cluster.
Sending requests to the cluster.
EXPORT
The following constants are available for export
$START_PORT
Initial port number (9094) from which the search for a free port begins. The ZooKeeper server uses the first available port.
$DEFAULT_TOPIC
Default topic name.
CONSTRUCTOR
new
Starts the servers required for the cluster, or provides the ability to connect to an already running cluster. A ZooKeeper server is launched during the first start. Creates a Kafka::Cluster
object.
An error causes the program to halt.
The port is used to identify a particular server in the cluster. The data structures of these servers are created in the t/data
directory.
new()
takes arguments in key-value pairs. The following arguments are recognized:
kafka_dir => $kafka_dir - The root directory of the local Kafka installation.
replication_factor => $replication_factor - Number of Kafka servers to be started in the cluster.
Optional, default = 3.
partition => $partitions - The number of partitions per created topic.
Optional, default = 1.
reuse_existing => $reuse_existing - Connect to a previously created cluster instead of creating a new one.
Optional, default = false (creates and runs a new cluster).
t_dir => $t_dir - The required data structures are prepared in the given directory instead of the default t/ directory. When connecting to a cluster from another directory, you must specify the path to the t/ directory.
Optional - if not specified, the operation is carried out in the t/ directory.
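Putting the arguments above together, a typical test start-up might look like the following sketch (the KAFKA_BASE_DIR environment variable is an assumption; point it at your local Kafka installation):

```perl
#!/usr/bin/perl
use strict;
use warnings;

use Kafka::Cluster;

# Create and start a new cluster of 3 Kafka servers (a ZooKeeper
# server is launched first). Dies on error, as noted in DIAGNOSTICS.
my $cluster = Kafka::Cluster->new(
    kafka_dir          => $ENV{KAFKA_BASE_DIR},  # local Kafka installation (assumption)
    replication_factor => 3,                     # the default
    partition          => 1,                     # partitions per created topic, the default
);

# ... run tests against the cluster here ...

# Stop all servers and delete their data directories.
$cluster->close;
```

To attach to a cluster started by an earlier test script instead of creating a new one, pass reuse_existing => 1 (and t_dir if you are not running from the t/ directory).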
METHODS
The following methods are defined for Kafka::Cluster
class:
base_dir
Returns the root directory of local installation of Kafka.
log_dir( $port )
Constructs and returns the path to the data directory of the Kafka server with the specified port.
This function takes the following argument: $port - the port number of the Kafka server.
servers
Returns a sorted list of the ports of all Kafka servers in the cluster.
node_id( $port )
Returns the node ID assigned to the Kafka server in the cluster. Returns undef if the server does not have an ID, or if no server with the specified port is present in the cluster.
This function takes the following argument: $port - the port number of the Kafka server.
zookeeper_port
Returns port number used by zookeeper server.
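The informational methods above can be combined to inspect a running cluster; a sketch, assuming $cluster is a Kafka::Cluster object created as shown under CONSTRUCTOR:

```perl
# Inspect a running test cluster.
my @ports = $cluster->servers;            # sorted ports of all Kafka servers
my $port  = $ports[0];                    # pick one server

my $id   = $cluster->node_id( $port );    # its node ID, or undef if unknown
my $dir  = $cluster->log_dir( $port );    # path to that server's data directory
my $zk   = $cluster->zookeeper_port;      # port used by the ZooKeeper server
my $base = $cluster->base_dir;            # root of the local Kafka installation

printf "server %d (node %s) logs to %s; zookeeper on %d\n",
    $port, defined $id ? $id : 'undef', $dir, $zk;
```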
init
Initializes the data structures used by the Kafka servers. During initialization all servers are stopped and the data structures used by them are deleted. The ZooKeeper server is not stopped and its data structures are not removed.
stop( $port )
Stops the Kafka server specified by the port. If the port is omitted, stops all servers in the cluster.
This function takes the following argument: $port - the port number of the Kafka server (optional).
start( $port )
Starts (restarts) kafka server with specified port. If port is not specified starts (restarts) all servers in the cluster.
This function takes the following argument:
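A typical restart cycle using stop, init, and start might look like this sketch ($cluster is an existing Kafka::Cluster object, $port one of the values returned by $cluster->servers):

```perl
# Restart a single server in the cluster.
$cluster->stop( $port );    # stop just this server
$cluster->start( $port );   # start it again

# Re-initialize the whole cluster from scratch.
$cluster->stop;             # no port: stop every server in the cluster
$cluster->init;             # delete the servers' data structures
                            # (ZooKeeper keeps running)
$cluster->start;            # no port: start every server again
```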
request( $port, $bin_stream, $without_response )
Transmits a binary request string to a Kafka server and returns its binary response. When the $without_response
argument is set to a true value, no response is expected and the function returns an empty string.
The Kafka server is identified by the specified port.
This function takes the following arguments:
$port
- the port number of the Kafka server. The $port should be a number.
$bin_stream
- a binary string containing the request to be sent to the Kafka server.
$without_response
- optional; set to a true value when no response is expected from the server.
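Sending a raw request could be sketched as follows; here $bin_stream is assumed to already hold a complete binary Kafka protocol message, built elsewhere (for example with the Kafka::Protocol encoders):

```perl
# Send a raw binary request to one server in the cluster.
my ( $port ) = $cluster->servers;    # first (lowest) server port

# Normal request/response round trip: returns the binary response.
my $response = $cluster->request( $port, $bin_stream );

# When the request by design gets no reply (e.g. a produce request
# that does not require acknowledgement), pass a true third argument;
# the method then returns an empty string instead of reading a response.
my $empty = $cluster->request( $port, $bin_stream, 1 );
```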
close
Stops all started servers (including the ZooKeeper server). Deletes all data directories used by the servers.
data_cleanup
This function stops all running server processes and deletes all data directories and service files in the t/data
directory.
Returns the number of deleted files.
data_cleanup()
takes arguments in key-value pairs. The following arguments are recognized:
kafka_dir => $kafka_dir - The root directory of the Kafka installation.
t_dir => $t_dir - The required data structures are prepared in the given directory instead of the default t/ directory. When connecting to a cluster from another directory, you must specify the path to the t/ directory.
Optional - if not specified, the operation is carried out in the t/ directory.
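Cleaning up after a previous test run could be sketched as below; whether data_cleanup is invoked as a class method is an assumption based on the key-value interface described above:

```perl
# Remove everything a previous test run left behind in t/data.
# No cluster object is needed; all running server processes are
# stopped first.
my $deleted = Kafka::Cluster->data_cleanup(
    kafka_dir => $ENV{KAFKA_BASE_DIR},  # root of the Kafka installation (assumption)
);
print "deleted $deleted files\n";
```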
DIAGNOSTICS
An error causes the script to die automatically. The error message is displayed on the console.
SEE ALSO
The basic operation of the Kafka package modules:
Kafka - constants and messages used by Kafka package modules.
Kafka::Connection - interface to connect to a Kafka cluster.
Kafka::Producer - interface for producing client.
Kafka::Consumer - interface for consuming client.
Kafka::Message - interface to access Kafka message properties.
Kafka::Int64 - functions to work with 64 bit elements of the protocol on 32 bit systems.
Kafka::Protocol - functions to process messages in Apache Kafka's Protocol.
Kafka::IO - low-level interface for communication with Kafka server.
Kafka::Exceptions - module designated to handle Kafka exceptions.
Kafka::Internals - internal constants and functions used by several package modules.
A wealth of detail about Apache Kafka and Kafka Protocol:
Main page at http://kafka.apache.org/
Kafka Protocol at https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
SOURCE CODE
Kafka package is hosted on GitHub: https://github.com/TrackingSoft/Kafka
AUTHOR
Sergey Gladkov
CONTRIBUTORS
Alexander Solovey
Jeremy Jordan
Sergiy Zuban
Vlad Marchenko
COPYRIGHT AND LICENSE
Copyright (C) 2012-2017 by TrackingSoft LLC.
This package is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See perlartistic at http://dev.perl.org/licenses/artistic.html.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.