Stopping and Starting
The following commands are for Linux distributions.
Restarting Docker
- To restart Docker, first configure Docker to start on boot using the enable command:
  $ sudo systemctl enable docker
- To start Docker, run the start command:
  $ sudo systemctl start docker
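To confirm that both commands took effect, systemd can report the boot-time setting and the current service state (a quick sanity check, not part of the procedure above):

  $ sudo systemctl is-enabled docker
  enabled
  $ sudo systemctl status docker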
Shutting Down Cassandra
- On the deployer node, SSH to the target node.
- Check the Cassandra status:
  $ nodetool status
  Datacenter: datacenter1
  =======================
  Status=Up/Down
  |/ State=Normal/Leaving/Joining/Moving
  --  Address      Load      Tokens  Owns (effective)  Host ID                               Rack
  UN  10.128.0.38  1.17 MiB  256     100.0%            d134e7f6-408e-43e5-bf8a-7adff055637a  rack1
- To stop Cassandra, find the process ID and run the kill command:
  $ pgrep -u autoid -f cassandra | xargs kill -9
- Check the status again:
  $ nodetool status
  nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
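To double-check that the Cassandra process is really gone before moving on, you can re-run the same pgrep used above (a minimal check; pgrep prints nothing and exits non-zero when no process matches):

  $ pgrep -u autoid -f cassandra || echo "no cassandra process found"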
Restarting Cassandra
- On the deployer node, SSH to the target node.
- Restart Cassandra. When the "No gossip backlog; proceeding" message is displayed, press Enter to continue:
  $ cassandra
  …
  INFO [main] 2020-11-10 17:22:49,306 Gossiper.java:1670 - Waiting for gossip to settle…
  INFO [main] 2020-11-10 17:22:57,307 Gossiper.java:1701 - No gossip backlog; proceeding
- Check the status of Cassandra. Make sure that it is in UN status ("Up" and "Normal"):
  $ nodetool status
Shutting Down MongoDB
- Check the status of MongoDB:
  $ ps -ef | grep mongod
- Connect to the Mongo shell:
  $ mongo --tls --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem \
      --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem \
      --tlsAllowInvalidHostnames --host <ip-address>
  MongoDB shell version v4.2.9
  connecting to: mongodb://<ip-address>:27017/?compressors=disabled&gssapiServiceName=mongodb
  2020-10-08T18:46:23.285+0000 W NETWORK [js] The server certificate does not match the hostname. Hostname: <ip-address> does not match CN: mongonode
  Implicit session: session { "id" : UUID("22c0123-30e3-4dc9-9d16-5ec310e1ew7b") }
  MongoDB server version: 4.2.9
- Switch to the admin database:
  > use admin
  switched to db admin
- Authenticate using the password set in the vault.yml file:
  > db.auth("root", "Welcome123")
  1
- Start the shutdown process (a non-interactive alternative is sketched after this procedure):
  > db.shutdownServer()
  2020-10-08T18:47:06.396+0000 I NETWORK [js] DBClientConnection failed to receive message from <ip-address>:27017 - SocketException: short read
  server should be down…
  2020-10-08T18:47:06.399+0000 I NETWORK [js] trying reconnect to <ip-address>:27017 failed
  2020-10-08T18:47:06.399+0000 I NETWORK [js] reconnect <ip-address>:27017 failed
- Exit the mongo shell with quit() or Ctrl-C:
  > quit()
- Check the status of MongoDB:
  $ ps -ef | grep mongod
  no instance of mongod found
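The interactive shutdown above can also be collapsed into a single call using the shell's --eval option (a sketch, not part of the documented procedure; the TLS flags are copied from the connect step, and <vault-password> stands for the root password from vault.yml):

  $ mongo --tls --tlsCAFile /opt/autoid/mongo/certs/rootCA.pem \
      --tlsCertificateKeyFile /opt/autoid/mongo/certs/mongodb.pem \
      --tlsAllowInvalidHostnames --host <ip-address> \
      -u root -p <vault-password> --authenticationDatabase admin \
      admin --eval 'db.shutdownServer()'

Because the server drops the connection as it shuts down, the shell exits with a connection error; that is expected.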
Restarting MongoDB
- Restart the MongoDB service:
  $ /usr/bin/mongod --config /opt/autoid/mongo/mongo.conf
  about to fork child process, waiting until server is ready for connections.
  forked process: 31227
  child process started successfully, parent exiting
- Check the status of MongoDB:
  $ ps -ef | grep mongod
  autoid    9245     1  0 18:48 ?        00:00:45 /usr/bin/mongod --config /opt/autoid/mongo/mongo.conf
  autoid   22003  6037  0 21:12 pts/1    00:00:00 grep --color=auto mongod
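Beyond the process list, you can confirm that mongod is actually accepting connections on its port (this assumes the default port 27017 shown in the connect step earlier):

  $ ss -tln | grep 27017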
Shutting Down Spark
- On the deployer node, SSH to the target node.
- Check the Spark status. Make sure that it is up and running:
  $ elinks http://localhost:8080
- Stop the Spark Master and workers:
  $ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/stop-all.sh
  localhost: stopping org.apache.spark.deploy.worker.Worker
  stopping org.apache.spark.deploy.master.Master
- Check the Spark status again. You should see:
  Unable to retrieve http://localhost:8080: Connection refused
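If elinks is not installed on the node, the same check can be done headlessly with curl (a minimal alternative; it prints 200 while the Master UI is up and 000 or an error once it is down):

  $ curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080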
Restarting Spark
- On the deployer node, SSH to the target node.
- Start the Spark Master and workers. Enter the user password on the target node when prompted:
  $ /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/sbin/start-all.sh
  starting org.apache.spark.deploy.master.Master, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.master.Master-1.out
  autoid-2 password:
  localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/autoid/spark/spark-2.4.4-bin-hadoop2.7/logs/spark-autoid-org.apache.spark.deploy.worker.Worker-1.out
- Check the Spark status again. Make sure that it is up and running:
  $ elinks http://localhost:8080
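As an alternative to the web UI, the running daemons can be confirmed from the process list (a quick check; the class names match the start-all.sh output above):

  $ ps -ef | grep -E 'spark\.deploy\.(master\.Master|worker\.Worker)' | grep -v grep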