
 "Last updated on Dec 25, 2021"

IBM MQ is one of the major market players in message-oriented middleware technologies. In this post, I have tried to compile some important questions and their answers.

1) What is IBM MQ?

Message Queuing is a technology used to send and receive messages between different applications and platforms. IBM MQ is a Message Queuing product from IBM.

2) How to install MQ in Linux/Unix server?

Below are the steps to install IBM MQ 9.2. MQ 9.2 can be installed on a 64-bit Linux® system, and root access is needed to install it.

  • Log in or switch to root on the Linux server
  • Run "./mqlicense.sh -text_only" to accept the license
  • Use "rpm -ivh MQSeriesRuntime-.rpm MQSeriesServer-.rpm" (to install the runtime and server components) or "rpm -ivh MQSeries*.rpm" (to install all components) to install MQ in the default location

Note: If you have another MQ version already installed, you have to use "crtmqpkg" to rebuild the packages for a non-default installation path before installing. For more details, refer to "https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.2.0/com.ibm.mq.ins.doc/q009010_.htm"

3) How to create a Queue Manager with parameters?

To create a QM with Circular logging, run 'crtmqm -c "Descriptive text" -u "DLQ" -lc -ld "LogPath" -lf "LogFilePages" QM_Name'

To create a QM with Linear logging, run 'crtmqm -c "Descriptive text" -u "DLQ" -ll -lp "Primary log count" -ls "Secondary log count" -ld "LogPath" -lf "LogFilePages" QM_Name'

Parameter details:
  • c - Descriptive text about QM
  • u - Dead Letter Queue name
  • lc - To create the QM with Circular Logging
  • ll - To create the QM with Linear Logging
  • lp - Number of Primary log files (Minimum is 2 and System Default is 3 if not mentioned)
  • ls - Number of Secondary log files (Minimum and System Default is 2)
  • ld - The path in which the log files should be stored
  • lf - Number of log file pages (minimum is 64 and system default is 4096)
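
For illustration, here is a minimal sketch with hypothetical values (a queue manager named QM1 with a dead letter queue, linear logging, and a log path under /var/mqm):

# Create QM1 with linear logging, 5 primary and 3 secondary log files of 8192 pages each
crtmqm -c "Test queue manager" -u DEAD.LETTER.QUEUE -ll -lp 5 -ls 3 -ld /var/mqm/log -lf 8192 QM1

# Start the queue manager and open an MQSC session against it
strmqm QM1
runmqsc QM1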

For more details, refer to "https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.mq.ref.adm.doc/q083120_.htm"

4) What is a distributed queuing environment?

A distributed queuing environment is a setup in which queue managers are configured to send and receive messages between themselves, so that each application does not need its own channels to every queue manager. It reduces the complexity of establishing communication between applications.

5) What are the types of Queues available in MQ?

  • Local Queue - A queue local to a QM that can be used to store messages
  • Remote Queue Definition - A queue definition used to send messages to a local queue on another queue manager
  • Transmit Queue (XMITQ) - A type of local queue used to send messages to a remote queue manager
  • Alias Queue - An alias that points to another queue or topic. It cannot store any messages.
  • Model Queue - A template containing a queue definition, used to create dynamic queues.
  • Dynamic Queue - A queue created dynamically upon request from an application. It can be temporary or permanent.

6) How to create a local queue?

define qlocal("Q_Name") maxdepth("maximum capacity of the queue") maxmsgl("Maximum message length") defpsist(Yes/No) descr("description of the queue")

7) How to create a Remote Queue Definition?

define qremote("Q_Name") XMITQ("Transmit queue name") rname("Destination Local Queue Name") rqmname("Destination Queue Manager Name")

8) How to create Transmit Queue?

define qlocal("Q_Name") usage(XMITQ) descr("Description")

9) How to create SDR and RCVR Channels?

 define channel("Channel_Name") chltype(SDR) conname("Ipaddress or server name") xmitq("Transmit Queue Name") descr("Description")

define channel("Channel_Name") chltype(RCVR) descr("Description")

10) How to create SVRCONN channel?

define channel("Channel_Name") chltype(SVRCONN) trptype(TCP) descr("Description")

11) What is a multi-instance queue manager?

A multi-instance queue manager is a configuration in which a queue manager is defined on two servers that share the same file system for MQ data. One instance acts as the active (primary) instance and the other as the standby. Whenever the active instance goes down, the standby instance on the second server comes into the running state and starts servicing requests.

12) What is the concept behind a multi-instance queue manager?

In the multi-instance queue manager concept, a QM is configured on two servers. Both servers must have the same MQ file systems mounted, and the same level of permissions must be granted on them.

When the QM starts on the primary server, it acquires a file lock on the queue manager's data files; the instance on the secondary server also tries to acquire the lock and keeps retrying. So when the primary instance goes down, the file lock is released, and the secondary instance acquires the lock and starts servicing requests.

13) How to create a multi-instance queue manager

  • The MQ file systems should be mounted on both the primary and secondary servers.
  • The mqm user and group should have the same level of access to the mounted file systems on both servers.
  • Create the QM on the primary server with the crtmqm command.
  • Run the "dspmqinf" command on the primary server and capture the output.
  • Run "addmqinf" on the secondary server with the parameters obtained in the previous step.
  • Start the QM on the primary server using the command "strmqm -x QM_Name". It will come up as "RUNNING".
  • Start the QM on the secondary server using the command "strmqm -x QM_Name". It will come up as "RUNNING as standby".
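
For example, a minimal sketch of these steps for a hypothetical queue manager QM1 whose data and logs sit on a shared mount /MQHA:

# On the primary server: create the QM with data and logs on the shared file system
crtmqm -md /MQHA/qmgrs -ld /MQHA/logs QM1

# Print the instance definition as a ready-to-run addmqinf command
dspmqinf -o command QM1

# On the secondary server: run the addmqinf command printed above, which looks similar to
addmqinf -s QueueManager -v Name=QM1 -v Directory=QM1 -v Prefix=/var/mqm -v DataPath=/MQHA/qmgrs/QM1

# Start the active instance on the primary and the standby instance on the secondary
strmqm -x QM1    # on the primary: dspmq shows Running
strmqm -x QM1    # on the secondary: dspmq shows Running as standby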

14) What is Cluster in IBM MQ?

An MQ cluster is used to group queue managers logically, avoiding the administration of many individual channels. It reduces the complexity of distributed queuing.

The full repository queue managers hold the complete configuration details of the cluster, while partial repository queue managers get configuration details on a 'need-to-know' basis.

15) How to configure a cluster?

  • Alter the QMGR repos parameter on the queue managers that will be full repositories (alter qmgr repos(CLUS_NAME))
  • Define a CLUSRCVR channel on each queue manager (define channel(name) chltype(CLUSRCVR) trptype(tcp) conname("alias or IP of this server(port)") cluster(CLUS_NAME))
  • Define a CLUSSDR channel on each queue manager pointing to a full repository (define channel(name) chltype(CLUSSDR) trptype(tcp) conname("alias or IP of the full repository server(port)") cluster(CLUS_NAME))

Typically the first two queue managers are configured as full repositories, and subsequent queue managers are added as partial repositories. If a queue manager needs to send a message to another queue manager in the cluster and it doesn't have the configuration of the destination QM, it gets the configuration details from a full repository and stores them.

Cluster queues can be created on any queue manager in the cluster. Any QM in the cluster can put messages to those queues, but only the local (hosting) QM can read the messages.
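
For example, a queue can be advertised to the whole cluster from its hosting queue manager like this (hypothetical names, run inside runmqsc):

* Any QM in CLUS_NAME can now resolve and put to this queue; only the hosting QM can get from it
DEFINE QLOCAL('CLUSTER.APP.QUEUE') CLUSTER('CLUS_NAME')

* Check how the cluster sees the queue
DISPLAY QCLUSTER('CLUSTER.APP.QUEUE')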

16) What is Pub/Sub concept in MQ?

The traditional message flow is from one sender to one receiver. But some scenarios need messages to be sent to multiple systems/applications for processing.

To achieve this, pub/sub is used. A publisher publishes to a topic, and any number of subscribers can subscribe to it. Whenever the publisher sends a message, all the subscribers get a copy of it.

17) How to create basic pub/sub structure in MQ?

Creating Pub:

  • Define a topic (define topic("topic_name") topicstr("topic_string") defpsist(yes))
  • Create an alias queue pointing to the topic (define qalias("queue_name") target("topic_name") targtype(TOPIC))
  • Provide subscribe access on this topic to a group or principal

If a message is put into the alias queue, it will reach the topic.

Creating Sub:

  • Create a local queue (sub_local_queue) to receive the messages
  • Define the subscription (define sub(sub_name) topicstr("topic_string") dest(sub_local_queue))

Multiple subscriptions can be created for a topic string, and each will receive a copy of every message put to the topic.
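
Putting both sides together, a minimal runmqsc sketch (topic, queue, and subscription names are hypothetical):

* Publisher side: the topic object, its topic string, and an alias queue for applications that can only put to queues
DEFINE TOPIC('SPORTS') TOPICSTR('/news/sports') DEFPSIST(YES)
DEFINE QALIAS('SPORTS.PUB.ALIAS') TARGET('SPORTS') TARGTYPE(TOPIC)

* Subscriber side: a destination queue and an administrative subscription delivering into it
DEFINE QLOCAL('SPORTS.SUB.Q1')
DEFINE SUB('SPORTS.SUB1') TOPICSTR('/news/sports') DEST('SPORTS.SUB.Q1')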

18) What is a Gateway Queue Manager?

A gateway queue manager comes into the picture when a single service runs on two separate servers and there is a need to load balance between them.

Two queue managers are created with the same interface queue names, and a local queue (on each QM) to which the interface queues point.

The gateway queue manager is clustered with both of these queue managers. When an application puts a message to the interface queue through the gateway QM, the queue is not hosted by the gateway QM, so it resolves the queue within the cluster. Since the queue is hosted by both QMs, the cluster workload algorithm routes the messages alternately to the interface queue on each of them.
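
A minimal sketch of the idea (queue manager, queue, and cluster names are hypothetical): each of the two service queue managers advertises an instance of the same queue to the cluster, and the gateway simply puts messages to that queue name.

* On QM1 and QM2 (the two service queue managers), run in runmqsc:
DEFINE QLOCAL('INTERFACE.QUEUE') CLUSTER('GW.CLUS') DEFBIND(NOTFIXED)

* On the gateway queue manager, applications put to 'INTERFACE.QUEUE';
* the cluster workload algorithm spreads the messages across both hosting QMs.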

This is only a partial list of questions, and I will be adding a few more. Please feel free to post your questions in the comment section if you want me to add them here.

If you find this post useful, please follow me on Facebook, Twitter and Linkedin to get alerts on new posts. Also please subscribe to the blog.

In this post, I try to explain all about brokers in Kafka.

What is a Broker?

The basic component of Kafka infrastructure is the broker.

Kafka broker, Kafka instance, and Kafka node all mean the same thing: the broker is what producers post messages to and consumers get messages from. In simple words, a Kafka broker is a running instance of Kafka.

Multiple brokers form a Kafka cluster, which is used for load balancing and fault tolerance.

Let's see the technical details of the broker's configuration.

Broker configurations are in the server.properties configuration file (path: config/server.properties).

broker.id - The unique id of the broker. It can be defined manually; if undefined, ZooKeeper generates one. To avoid conflicts between generated and user-defined broker ids, generated ids start from reserved.broker.max.id + 1. In a cluster, two brokers cannot have the same id.

listeners - The list of listeners (URIs) the broker binds to. We can specify a hostname and port to bind to a specific interface, 0.0.0.0 to bind to all interfaces, or leave the hostname empty to bind to the default interface (e.g., PLAINTEXT://0.0.0.0:port,SSL://:port).

advertised.listeners - Used by external clients. The listeners above are for the internal network; if an external client needs to communicate with the brokers, an external IP/port is defined in advertised.listeners (0.0.0.0 is not valid for advertised.listeners). To know more about listener configurations, please refer to rmoff's blog.

listener.security.protocol.map - Maps security protocols to specific listeners. If we use more than one listener and need different security settings for them, we can define the mapping with this parameter. For example, internal and external traffic can be separated using this property as 'INTERNAL:PLAINTEXT,EXTERNAL:SSL'.

num.network.threads - The number of threads the broker uses to receive requests from and send responses to the network. The default value is 3 and the minimum is 1.

num.io.threads - The number of threads the broker uses to process requests, which may include disk I/O. The default value is 8 and the minimum is 1.

log.dirs - The directories in which the Kafka broker stores its log (message) data.

num.partitions - The default number of partitions per topic created on this broker.

num.recovery.threads.per.data.dir - The number of threads per data directory used for log recovery at startup and for flushing at shutdown.

log.retention.hours - Defines the age up to which a log file is retained. After the defined number of hours, it will be removed.

log.retention.bytes - It defines a size based retention policy.

log.segment.bytes - Defines the maximum size of a log segment file. Once a segment reaches the defined size, a new log segment is created.

log.retention.check.interval.ms - It defines the interval time at which the log segments will be checked for removal as per retention policies.

zookeeper.connect - Defines the ZooKeeper connection string for the broker.

zookeeper.connection.timeout.ms - The timeout in milliseconds for establishing the ZooKeeper connection.
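
Pulling these together, here is a minimal sample server.properties sketch; the broker id, host names, ports, and paths are hypothetical values for illustration:

# Unique id of this broker within the cluster
broker.id=1
# Listeners bound by the broker and the addresses advertised to clients
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://broker1.example.com:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
# When named listeners are used, this tells brokers which listener to talk to each other on
inter.broker.listener.name=INTERNAL
# Request handling threads
num.network.threads=3
num.io.threads=8
# Message data directories and topic defaults
log.dirs=/var/kafka/data
num.partitions=3
num.recovery.threads.per.data.dir=1
# Retention and segment settings
log.retention.hours=168
log.retention.bytes=-1
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# ZooKeeper connection
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000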

I hope this post gave you a basic understanding of what a broker is and of the configurations in the server.properties file.

It's been long since my last post. In the previous posts, we saw the use cases of Apache Kafka.

In this post, we will see some of the basic admin activities of Kafka.

So what are the activities of a Kafka admin?

Similar to any middleware administrator's role, it covers the topics below:

  1. Installing Kafka
  2. Configuring and starting brokers
  3. Setting up clusters
  4. Creating Topics
  5. Configuring Kafka Connect
  6. Configuring Kafka Streams
  7. Troubleshooting
  8. Fine-tuning
  9. Upgrades/Patches
  10. Maintaining interconnectivity with other Apache applications

We will see them one by one..

1. Installing Kafka:

Kafka installation is very simple. Go to the official Apache Kafka page and download the file.

Once downloaded, un-tar the file (if you are unfamiliar with tar commands, please refer here for details):

tar -xzf kafka_2.11-2.0.0.tgz

Let's take this destination directory as the Kafka home directory.

2. Configuring brokers:

A broker is nothing but a node which is serving Kafka. Multiple brokers form a Kafka Cluster.

Kafka uses Apache ZooKeeper to store information about the brokers and consumers (we will see the uses of ZooKeeper in Kafka in a different post).

So, before starting a broker, we have to start ZooKeeper.

If you already have a clustered ZooKeeper environment installed, please use it for Kafka.

To install/administer ZooKeeper clusters for high availability, please see the ZooKeeper admin guide.

Otherwise, you can use the single-node ZooKeeper instance packaged with Kafka:

bin/zookeeper-server-start.sh config/zookeeper.properties (run from Kafka home dir)

Once ZooKeeper is started, you can start the broker with the command below:

bin/kafka-server-start.sh config/server.properties (from Kafka home dir)

By default the broker uses the configuration in server.properties; you can change it as needed.

Note: the bin directory in the Kafka home dir contains all the scripts, and the config dir contains the configuration files.

3. Setting up Clusters:

A single-node environment can be used for testing/dev purposes, but we usually need a clustered Kafka environment for high availability and fault tolerance.

Multiple Kafka brokers form a Kafka cluster. ZooKeeper has the updated list of brokers and helps in cluster formation.

To create a cluster, we need multiple broker instances on the same or different nodes; then, when creating a topic, we define its replication details, which are registered in ZooKeeper.

Generic commands from official site are as below,

Create multiple copies of server.properties in the config folder (or copy the entire Kafka folder and rename the server.properties files as required):

cp config/server.properties config/server-1.properties

cp config/server.properties config/server-2.properties

Below are the values that need to be changed in each file:

broker.id=<1,2,3... depending upon the number of brokers> (every broker registered in ZooKeeper needs a unique id)

listeners=PLAINTEXT://:port (if you are running the brokers on the same machine, please give each one a different port)

log.dirs=<path> (if you are using a shared path, please provide a different log directory for each broker, like kafka-logs-1, kafka-logs-2)
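
For example, a second broker on the same machine might use values like these (the port and path are hypothetical):

# config/server-1.properties
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1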

Start the brokers using the commands,

bin/kafka-server-start.sh config/server-1.properties &

bin/kafka-server-start.sh config/server-2.properties &

To know more about the syntax, please refer to the official page.

How does the cluster work?

ZooKeeper manages the cluster and coordinates the brokers. It keeps track of all brokers and their availability, and it also starts the election process in case of a broker failure to elect a new leader for a topic partition.

4. Creating topics:

Creating a topic in Kafka is very easy. The generic command to create a Kafka topic on a single broker is:

bin/kafka-topics.sh --create --zookeeper <zookeeper_host:port> --replication-factor 1 --partitions 1 --topic <topic_name> (from the Kafka home dir)

kafka-topics.sh is the standard script that comes with the Kafka installation to administer topics.

Parameter definitions:

--create - creates a topic

--zookeeper - gives the ZooKeeper connection details

--replication-factor - defines how many brokers will hold a replica of this topic

--partitions - defines the number of partitions to create

--topic - gives the topic name

For cluster environments the command is as below:

bin/kafka-topics.sh --create --zookeeper <zookeeper_host:port> --replication-factor 3 --partitions 1 --topic <topic_name> (from the Kafka home dir)

In the above command the replication factor is 3, which means the topic's partitions will be replicated across 3 brokers in the cluster.

You can check the list of topics on a broker using:

bin/kafka-topics.sh --list --zookeeper <zookeeper_host:port> (from the Kafka home dir)

Also, to check the details of a topic:

bin/kafka-topics.sh --describe --zookeeper <zookeeper_host:port> --topic <topic_name>
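
Putting it together, here is a minimal sketch with hypothetical values (a local ZooKeeper on port 2181 and a topic named test-topic):

# Create a topic replicated across 3 brokers with a single partition
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic test-topic

# List all topics registered in this ZooKeeper ensemble
bin/kafka-topics.sh --list --zookeeper localhost:2181

# Show the leader, replicas, and in-sync replicas (ISR) of the topic
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-topic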

We will see the connectors in more detail in the next post.

Please write your suggestions at srinivasanbtechit@gmail.com or comment in the blog.

We have discussed some of the use cases of Apache Kafka in the previous post. We will continue to explore other use cases in this post.

Metrics:

Kafka is often used for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

Log Aggregation:

Log aggregation means collecting the physical log files from servers and placing them in a centralized location, either on a file server or in the Hadoop Distributed File System (HDFS), for processing.

Log management plays a vital role in an IT infrastructure, and Kafka offers performance comparable to log aggregation tools like Scribe or Flume.

Stream processing:

Stream processing is an important use case of Kafka, in which the raw data from a topic is aggregated, enriched, or transformed into new topics for further processing/consumption.

Stream processing is used effectively in many fields to optimize the user experience. Some of its uses are:

  • Stock Market Surveillance
  • Smart Device applications
  • Geofencing/vehicle tracking
  • Sports tracking

Kafka Streams is the client library used in Apache Kafka to achieve stream processing. Some of the organizations using Kafka Streams are:

The New York Times

Pinterest

Rabobank

Line

Trivago

More details in https://kafka.apache.org/documentation/streams/

 

Event Sourcing:

Event sourcing means ensuring that every state change of an application is maintained as a time-ordered sequence of records.

The idea behind event sourcing is to capture every state change of an application in an event object, in the same order in which the changes occur.

Kafka's support for very large stored log data makes it an excellent backend for an application built in this style.

Commit log:

In simple terms, a commit log is a record of transactions that keeps track of the latest events/commits in a large distributed system. All commits are recorded to the log in a central system, which keeps track of them before they are applied. It helps in tracking failed transactions and in retrying/resuming in-flight operations that failed.

Kafka can act as an external commit log for a distributed system. The log compaction feature in Kafka helps with this functionality (Apache BookKeeper is another product that provides the same commit log usage as Kafka).

Log compaction makes sure that the last known value of each record key in a topic is retained. It means that, in a series of events in a distributed system, Kafka stores the last accepted state, so in case of a crash/failure the system can restore its state from that point.

 

Let's see some useful admin activities and commands in the next post.

In the previous post, we saw an introduction to Apache Kafka. Unlike traditional messaging software, Kafka is used in a variety of use cases. We will see some of them in this post.

Messaging:

Usage of Kafka as a message system has been discussed in the previous post.

Message brokers often act as a middle layer, used for various reasons including decoupling the source and destination, asynchronous message processing, and routing to multiple destinations.

Kafka supports both point-to-point and pub-sub methods by implementing the consumer group concept. Below is a comparison between Kafka and other messaging systems.

Protocol:

Kafka - Uses a binary protocol that defines all APIs as request-response message pairs

RabbitMQ - AMQP (Advanced Message Queuing Protocol)

IBM MQ - Supports JMS, MQI

Model:

Kafka - dumb broker / smart consumer model

RabbitMQ - smart broker / dumb consumer model

IBM MQ - smart broker / dumb consumer model (based on my understanding of the smart broker / dumb consumer distinction)

License:

Kafka - Open source under the Apache License 2.0

RabbitMQ - Open source under the Mozilla Public License

IBM MQ - Proprietary software

Client Libraries:

Kafka - C/C++, Python, Go, Erlang, .NET, Clojure, Ruby, Node.js, proxies (HTTP REST, etc.), Perl, PHP, Rust, alternative Java, Scala DSL, Storm, Swift.

RabbitMQ - Java and Spring, .NET, Ruby, Python, PHP, Objective-C and Swift, Scala, Groovy and Grails, Clojure, JRuby, JavaScript, C/C++, Go, Unity 3D, Erlang, Haskell, OCaml, Perl, web messaging, CLI, 3rd-party plugins, Common Lisp, COBOL.

IBM MQ - Uses the MQ client for message transfer. Also supports .NET, ActiveX, C++, Java™, JMS, and REST APIs. Using MQ Light, we can also connect from Node.js, Java, Ruby, and Python.

Benchmark:

Apache Kafka - 821,557 records/s (single producer thread, no replication). More detailed results are here.

RabbitMQ - 53,710 msg/s (producer only, no consumer). More detailed results are here.

IBM MQ - Please see the performance results of IBM MQ here.

Web Activity Tracking:

Web activity tracking means tracking user activity such as page views and searches, which is used in web analytics to optimize the user experience. Web activity tracking is often very high in volume, as many activity messages are generated for each user page view.

Web activity tracking needs a system that can transfer millions of records per second in order to provide results relevant to the user's needs.

Below are some statistics provided by Kafka users:

LinkedIn - https://engineering.linkedin.com/kafka/running-kafka-scale (blog post by a Senior Staff Engineer at LinkedIn)

"When combined, the Kafka ecosystem at LinkedIn is sent over 800 billion messages per day which amounts to over 175 terabytes of data. Over 650 terabytes of messages are then consumed daily, which is why the ability of Kafka to handle multiple producers and multiple consumers for each topic is important. At the busiest times of day, we are receiving over 13 million messages per second, or 2.75 gigabytes of data per second. To handle all these messages, LinkedIn runs over 1100 Kafka brokers organized into more than 60 clusters."

Yahoo - https://yahooeng.tumblr.com/post/109994930921/kafka-yahoo

"Kafka is used by many teams across Yahoo. The Media Analytics team uses Kafka in our real-time analytics pipeline. Our Kafka cluster handles a peak bandwidth of more than 20Gbps (of compressed data)."

Tumblr - http://highscalability.com/blog/2012/2/13/tumblr-architecture-15-billion-page-views-a-month-and-harder.html

"Tumblr started as a fairly typical large LAMP application. The direction they are moving in now is towards a distributed services model built around Scala, HBase, Redis, Kafka, Finagle,  and an intriguing cell based architecture for powering their Dashboard. Effort is now going into fixing short term problems in their PHP application, pulling things out, and doing it right using services."

We will see the other usecases in the next post.

Disclaimer:

All the statistics and benchmark results are taken from the respective websites provided in the links. None of the tests were conducted by me, and none of the results are owned by me. The test results and data published on the company websites are the property of the respective companies. Please refer to the website links for more details.

There may be differences from the data given on the websites (which may have been updated since), so please refer to the websites directly for more detailed data.

Nowadays, Kafka is a word used by most people in the message-oriented middleware industry. I was introduced to Kafka by a colleague, as it may become one of the big players among message brokers in the cloud era.

What is Kafka?

As per Kafka's standard definition, it is open-source stream-processing software from the Apache Software Foundation.

If it is just another message broker, how does it differ from the market leaders?

Kafka is a stream-processing software platform.

Ok. What is the meaning of stream-processing? 

Stream processing means real-time processing of data: continuously, concurrently, and in a record-by-record fashion. Real-time processing in Kafka means it can get data from a topic (source), do some analysis on or processing of the data, and publish it to another topic (sink). This is achieved using Kafka Streams, a client library that can process/analyze the data stored in Kafka.

How Kafka differs from traditional Message brokers?

Traditional message brokers handle a message in such a way that it can be consumed only once. Once it is consumed, the message is no longer available.

In Kafka, the messages are stored in a file system and will be available for a period of time as per the retention policy even though they are consumed already.

Then how does it handle message processing?

Some important terminology before looking at the Kafka workflow:

Kafka cluster: Made up of multiple Kafka brokers (servers).

Topic: A category or feed name to which records are published.

Partition: For each topic, the cluster maintains a partitioned log (a partition is a division of a topic, and each partition is replicated to other servers in the cluster).

A topic may have multiple partitions, and the partitions are replicated across the brokers. Each partition contains messages/records in an immutable, ordered sequence.

Workflow:

  1. The producer sends a message to a partition in a topic, and it is replicated across brokers for fault tolerance. The producer can send messages to the same partition by specifying a key value.
  2. The broker on which the producer writes a message to a partition acts as the leader for that partition and replicates the message to the corresponding partitions on the other brokers. ZooKeeper is used to monitor the leader and elect a new one if the leader goes down.
  3. Messages are appended to a partition in sequence, and an offset number is assigned to each message.
  4. When a consumer registers to a topic, the offset is shared with the consumer, and the consumer can start consuming new messages as they arrive.
  5. As the consumer consumes each message, its offset advances linearly along the partition log, and the offset is kept in ZooKeeper. In case of connection drops, ZooKeeper provides the consumer with the last successful offset number on re-connection.
  6. Unlike traditional brokers, the consumer can skip ahead or rewind to a desired offset and consume the messages.
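
To try this workflow hands-on, here is a minimal sketch using the console clients shipped with Kafka (assuming a broker on localhost:9092 and a hypothetical topic named test-topic):

# Produce a few messages from the console (type a line per message, Ctrl+C to stop)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic

# Consume the topic from the very first offset
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --from-beginning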

So how to avoid duplication?

The consumer consumes messages based on the offset value. Both the consumer and ZooKeeper know the consumer's current offset. If the connection terminates, ZooKeeper keeps track of the consumer's current offset, and it is shared with the consumer once the connection is established again.

How are point-to-point and pub-sub achieved in Kafka?

Consumers register themselves in a group called a consumer group, and the consumer group subscribes to a topic. When a message is published to the topic, it is delivered to one consumer within each consumer group.

At any point in time, the total number of consumers in a consumer group should not exceed the total number of partitions; otherwise, some consumers in that group will remain idle until an existing consumer exits.

If a Kafka cluster contains two brokers, each with two partitions, and there is one consumer group with two consumers, then each consumer in that group will be assigned two partitions.

If the consumer group has four consumers, each one will be assigned one partition. If a new consumer is added, it has to wait for some other consumer in the same group to exit.

Point-to-point:

The consumer group allows you to divide up processing over a collection of processes (the members of the consumer group).

Pub-Sub:

Kafka allows you to broadcast messages to multiple consumer groups.

This means that with a single architecture, Kafka can support both models. It does so by dividing up the processing within a consumer group (point-to-point) while at the same time broadcasting the same records to multiple consumer groups from a broker (pub-sub).
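
As an illustration, the console consumer can join a named consumer group (the group and topic names below are hypothetical). Running the first command in two terminals splits the topic's partitions between the two consumers, while the second command, using a different group, receives its own full copy of the records:

# Two consumers started with the same group share the partitions (point-to-point behaviour)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --group app-A

# A consumer in another group gets its own copy of every record (pub-sub behaviour)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test-topic --group app-B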

As a beginner, I wanted to share some of the questions I initially searched for about Kafka, and I hope to write another post with some deeper learning on Kafka.

"Last Updated on Aug 19, 2020"

Whenever I refresh my WebLogic knowledge, I wonder why there is not a single page or site that lists all the possible ways to start a WebLogic server. I searched a lot but couldn't find a page where all the ways are listed (maybe I missed it?).

So I thought I would write my first middleware post to list the methods to start a WebLogic server.

A few of them are:

  1. Using Standard scripts
  2. Using Weblogic Admin console
  3. Using WLST (without nm for admin alone)
  4. Using WLST and Nodemanager
  5. Using Java command directly

Let's see them one by one.

1) Using Standard scripts

The first and easiest way is to use the standard startup scripts, and it doesn't need Node Manager.

To start the admin server on Unix, use 'startWebLogic.sh' in 'BEA_HOME\user_projects\domains\DOMAIN_NAME\bin' (for Windows, 'startWebLogic.cmd').

To start a managed server on Unix, use 'startManagedWebLogic.sh' from the same path mentioned above (for Windows, 'startManagedWebLogic.cmd').

Generic syntax:

./startWebLogic.sh

./startManagedWebLogic.sh <managed_server_name> <admin_url>

2) Using Weblogic Admin console

To start/restart a server using the console, you must configure Node Manager first.

The console is available only if the admin server is running; hence, starting the admin server using the console is not a valid option.

To start managed server,

  • Login to console
  • Go to Environments ->Servers from left pane
  • Select control tab and start the server.

3) Using WLST (without nm for admin alone)

In this method, we can use the startServer command in WLST to start the admin server without Node Manager (offline mode). A managed server cannot be started without NM.

General syntax:

Run 'java weblogic.WLST' to launch WLST.

To start admin server: startServer('admin_server','domain_name','admin_url','username','password','domain_dir')

To start a managed server, launch WLST and then connect to the admin server using:

connect('username','password','admin_url')

start('managed_server','Server','managed_server_url')

4) Using WLST and Nodemanager

In this method, we can use nm commands to start both admin and managed servers.

General syntax :

Nodemanager should be up and running.

Invoke wlst using 'java weblogic.WLST'

Connect to NM from WLST offline using 'nmConnect('nm_username', 'nm_password', 'nm_host', 'nm_port', 'domain_name', 'domain_dir', 'ssl_type')'

Once connected, use 'nmStart('admin_server')' to start the admin server.

To start a managed server, you have to connect to the Node Manager running on the managed server's host and then use the same commands as above:

'nmConnect('nm_username', 'nm_password', 'nm_host', 'nm_port', 'domain_name', 'domain_dir', 'ssl_type')'

'nmStart('managed_server_name')'

In this method, if you want to start multiple servers residing on multiple machines, you have to connect to the NM running on each machine and then start the servers.
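
For example, a minimal WLST sketch of this method; the credentials, hosts, port, domain name, and paths below are hypothetical:

# Launched with: java weblogic.WLST
# Connect to the Node Manager on the admin host and start the admin server
nmConnect('weblogic', 'welcome1', 'adminhost.example.com', '5556', 'mydomain', '/u01/domains/mydomain', 'ssl')
nmStart('AdminServer')
nmDisconnect()

# Connect to the Node Manager on the managed server's host and start the managed server
nmConnect('weblogic', 'welcome1', 'mshost.example.com', '5556', 'mydomain', '/u01/domains/mydomain', 'ssl')
nmStart('managed_server_1')
nmDisconnect()
exit()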

5) Using Java command directly

Using the java command directly, you can start the server.

To start the admin server, first set the environment using the command below:

WL_HOME/server/bin/setWLSEnv.sh (for windows use cmd file)

Once the environment is set, go to the domain directory and run the command below.

Domain dir : BEA_HOME\user_projects\domains\DOMAIN_NAME

java weblogic.Server

If the admin server is up and you have already defined the managed server for that domain, then you can use the command below to start the managed server:

'java -Dweblogic.Name=<managed_server_name> -Dweblogic.management.server=<admin_url> weblogic.Server'

All the options and parameters mentioned above are the basics. You can find more parameters to fine-tune and customize your environment.

For inputs and suggestions, please comment.
