HDB++ for Cassandra

Hi Jyotin,

If you're using a version expecting the DbHost property: in the Cassandra HDB++ version, it corresponds to the list of Cassandra contact points (a comma-separated list of hosts) used by the Cassandra C++ driver to connect to your Cassandra cluster.
Here is what they say about that in the Cassandra C++ driver doc:
The contact points are used to initialize the driver and it will automatically discover the rest of the nodes in your cluster.
Performance tip: Include more than one contact point to be robust against node failures.

Please note that the Cassandra C++ driver will take care of distributing the load between the different nodes…
If you have a cluster with 2 nodes N1 and N2, and if you put only N1 in your DbHost property, the C++ driver will contact N1 to discover your Cassandra cluster and will then be aware that N2 is also part of your cluster.
The C++ Cassandra driver will then contact N1 or N2. You don't have to take care of load balancing or of the distribution of the data in your cluster.

In recent versions of libhdb++, this has been replaced with the LibConfiguration property, which is an array of strings.
The properties specific to the HDB++ backend library are now all embedded in the LibConfiguration property, using the format:
prop1=value1
prop2=value2

For the Cassandra HDB++ library, the configuration parameters must contain the following strings:

- Mandatory:
  • contact_points: Cassandra cluster contact points, e.g. cassandra_host_1,cassandra_host_2,…,cassandra_host_n, given as a comma-separated list of hostnames. The contact points are used to initialize the driver, which will then automatically discover the rest of the nodes in your Cassandra cluster. Tip: include more than one contact point to be robust against node failures.
  • keyspace: Keyspace to use within the cluster, e.g. hdb_test
  • libname: Name of the libhdb++cassandra.so file. This is the library which will be dynamically loaded by the HDBEventSubscriber and HDBConfigurationManager device servers. Typically this will be set to "libhdb++cassandra.so", but you could specify another file name. You will need to have the directory where libhdb++cassandra.so is installed in the LD_LIBRARY_PATH environment variable used by the HDBEventSubscriber and HDBConfigurationManager device servers.
- Optional:
  • user: Cluster login user name
  • password: Password for the above user name
  • local_dc: Datacenter name used for queries with a LOCAL consistency level (e.g. LOCAL_QUORUM). In the current version of this library, all statements are executed with LOCAL_QUORUM consistency level.
- Debug:
  • logging_enabled: true to enable command-line debug output, false to disable it
  • cassandra_driver_log_level: Cassandra logging level; see CassLogLevel in the Datastax documentation. This must be one of the following values:
    • TRACE: equivalent to CASS_LOG_TRACE
    • DEBUG: equivalent to CASS_LOG_DEBUG
    • INFO: equivalent to CASS_LOG_INFO
    • WARN: equivalent to CASS_LOG_WARN
    • ERROR: equivalent to CASS_LOG_ERROR
    • CRITICAL: equivalent to CASS_LOG_CRITICAL
    • DISABLED: equivalent to CASS_LOG_DISABLED
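
Putting these entries together, a LibConfiguration free property for the Cassandra backend might look like the following sketch (hostnames, keyspace, datacenter name, and credentials are placeholders — adjust them to your own cluster):

```
contact_points=cassandra_host_1,cassandra_host_2,cassandra_host_3
keyspace=hdb
libname=libhdb++cassandra.so
user=hdb_user
password=secret
local_dc=DC1
logging_enabled=false
```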

As you can guess, DbHost has been replaced with the contact_points entry of the LibConfiguration property in recent versions.

In recent versions of HDB++ libraries and device servers, the library is dynamically loaded, so you will need to have the directory where libhdb++cassandra.so is installed in the LD_LIBRARY_PATH environment variable used by HDBEventSubscriber and HDBConfigurationManager device servers.
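
For example, before launching the device servers you could set up the environment like this (the /usr/local/lib prefix is an assumption — use wherever libhdb++cassandra.so was actually installed on your system):

```shell
# Hypothetical install location of libhdb++cassandra.so -- adjust as needed.
HDBPP_LIB_DIR=/usr/local/lib
# Prepend it to LD_LIBRARY_PATH so the device servers can dlopen the library.
export LD_LIBRARY_PATH="$HDBPP_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

The HDBEventSubscriber and HDBConfigurationManager device servers must then be started from a shell (or service unit) that carries this environment variable.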

I guess you are just starting to play with Cassandra, but please note that it is usually not recommended to have only 2 nodes…
For high availability, you will need at least 3 nodes with a replication factor >= 3.
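
As an illustration, a keyspace with replication factor 3 could be created from cqlsh like this (the keyspace name "hdb" and datacenter name "DC1" are placeholders — the datacenter name must match the one reported by your snitch):

```sql
-- Sketch only: replace 'hdb' and 'DC1' with your own keyspace and datacenter.
CREATE KEYSPACE hdb
  WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```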
I strongly encourage anyone willing to install Cassandra in production to get some training on Cassandra before because Cassandra is not a traditional database and requires some knowledge to be operated smoothly.
I can recommend the Datastax academy videos which are very well done:
https://academy.datastax.com/courses

Kind regards,
Reynald
Rosenberg's Law: Software is easy to make, except when you want it to do something new.
Corollary: The only software that's worth making is software that does something new.
Hi Reynald & Lorenzo,

Thanks for the detailed answer. It resolves my query!

The link to Cassandra documentation is helpful. I will go through it.

Kind regards,
Jyotin
Hello Community members,

I want to explore the HDB++ Archiving solution with Cassandra backend.

I have successfully compiled libhdbpp-cassandra on my system. HDB++ ES and HDB++ CM are also running without errors. I have a single node cluster running on my system and wanted to test the archiving. However, the data is not getting archived into Cassandra backend. While adding attributes through the Configurator GUI, I get the following error on HDB++ CM:

Error (create_and_cache_prepared_object:512) Failed to prepare statement for query: SELECT att_conf_id,data_type,ttl FROM hdb.att_conf WHERE att_name = ? AND cs_name = ?. Error: No hosts available

Please find the configuration settings and software version installed on my system as below:

Software versions
UBUNTU 16.04
TANGO 9.2.2
OMNIORB 4.2.1
ZEROMQ 4.0.7
LIBUV 1.4.2 (along with dev package)
CASSANDRA_CPP_DRIVER 2.2.1 (along with dev package)
APACHE CASSANDRA - 2.2.11

Settings in cassandra.yaml file
seeds: "127.0.0.1"
listen_address: localhost
start_rpc: false
rpc_address: localhost
rpc_port: 9160
endpoint_snitch: SimpleSnitch

Settings in LibConfiguration property in JIVE
host=localhost
user=root
password=
keyspace=hdb
contact_points=localhost
consistency=LOCAL_QUORUM
port=9042 (tried changing it to port=9160 but still the error stays)
libname=libhdb++cassandra.so

I am also attaching cassandra.yaml file with this post. It seems that I am missing some Cassandra settings as the error says, "No hosts available." Any guidance on resolving this error will be helpful.

Regards,
Apurva Patkar
Hi,

In our production cluster, we leave the following two properties blank:

listen_address:
rpc_address:
As written in cassandra.yaml comments:

# Leaving it blank leaves it up to InetAddress.getLocalHost(). This
# will always do the Right Thing _if_ the node is properly configured
# (hostname, name resolution, etc), and the Right Thing is to use the
# address associated with the hostname (it might not be).

We also have:

start_rpc: true
endpoint_snitch: GossipingPropertyFileSnitch
in our production cluster. If you go for a production cluster, you should use that endpoint_snitch. For tests you can survive with SimpleSnitch.

If you add more nodes, you will need to change the seeds.
We have activated the thrift server (start_rpc: true) because we are still using an old OpsCenter version to monitor the nodes from time to time.

I think the first thing to do is to check whether you can connect to your Cassandra DB using standard Cassandra tools such as cqlsh.

In our case,
cqlsh localhost
will return the following error:
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})

But if we use:
cqlsh <your_host_name>
it works.

Hoping this helps a bit.
Reynald
Hi Reynald,

Thanks for your inputs. I made the suggested changes in cassandra.yaml file.
Settings in cassandra.yaml file:

seeds: "127.0.0.1,192.168.113.33"
listen_address:
rpc_address:
start_rpc: true
rpc_port: 9160
endpoint_snitch: GossipingPropertyFileSnitch

While running the Cassandra database, I get the following error:
ERROR Cannot start node if snitch's data center (dc1) differs from previous data center (datacenter1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.


Therefore, for the time being, I changed endpoint_snitch as below and kept the other properties the same:
endpoint_snitch: SimpleSnitch

With this configuration in cassandra.yaml, the Cassandra database runs successfully. HDB++ ES and HDB++ CM are also running without errors. While adding attributes through the Configurator GUI, I get the same error on HDB++ CM:
Error (create_and_cache_prepared_object:512) Failed to prepare statement for query: SELECT att_conf_id,data_type,ttl FROM .att_conf WHERE att_name = ? AND cs_name = ?. Error: No hosts available
Settings in LibConfiguration property in JIVE:
host=apurva-pc
user=rajiv
password=
dbname=hdb
consistency=LOCAL_QUORUM
contact_points=192.168.113.33
keyspace=
port=9042
libname=libhdb++cassandra.so

I am attaching cassandra.yaml and error screenshots with this post. Could you please provide your inputs on this?
Hi,

I strongly recommend that you get familiar with the Cassandra database: how to properly configure the snitch, and how to operate and maintain the database. This is especially important with Cassandra, which is a distributed NoSQL database.
Cassandra is very different from a database like MySQL.
It is very important to understand the basic concepts in order to configure and use it properly.

There are some configuration files involved when you use GossipingPropertyFileSnitch.
I guess you will just need to update /etc/cassandra/cassandra-rackdc.properties and put there the same datacenter name as you were using before, instead of dc1.
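
For example, based on the error message you posted (which says the previous datacenter name was datacenter1), the file would contain something like the following (rack1 is a placeholder):

```
# /etc/cassandra/cassandra-rackdc.properties
# dc must match the datacenter name the node was originally bootstrapped with
# ("datacenter1" in your error message); rack1 is a placeholder.
dc=datacenter1
rack=rack1
```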

Please refer to Apache Cassandra documentation to understand what you are doing.
I can also advise you to have a look at the Datastax Academy online training (https://academy.datastax.com).
There are many videos explaining all the Cassandra basics and there is one about the snitch.

I will also repeat my previous advice: try a standard Cassandra tool like cqlsh and see whether you can connect with it.

Hoping this helps,
Reynald
Hi Reynald,

Thanks for your help. The issues are resolved.

Actually, we had a custom name for the datacenter and forgot to provide the same name in the local_dc parameter of the LibConfiguration property. After providing this parameter, the "No hosts available" error was resolved. Thanks to this page of the TANGO documentation which helped solve the issue.
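
For reference, the fix amounted to adding one entry to the LibConfiguration property (the value below is a placeholder for our custom datacenter name, and must match the dc= value in cassandra-rackdc.properties):

```
local_dc=<your_datacenter_name>
```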

Also, we updated the datacenter name in cassandra-rackdc.properties, which enabled us to use GossipingPropertyFileSnitch.

Regards,
Apurva Patkar