Configuring IntelliJ IDEA for Kafka

In the earlier section, we talked about installing and setting up four things for Kafka application development.

  1. JDK 1.8
  2. Single node Kafka cluster
  3. A build tool such as Maven 3
  4. An IDE such as IntelliJ IDEA

We already covered JDK 1.8 and the single node Kafka cluster in the earlier section. In this section, we will learn how to install Maven 3 and IntelliJ IDEA. Once you complete those two items, you will be all set for Kafka development, including unit testing and debugging your applications in a local development environment.
Let’s start with Maven 3 installation and configuration.


Installing Maven 3

Apache Maven is one of the most popular and possibly the most widely used tools for building and managing Java-based projects. In this book, we will be using Maven 3.6.0, the latest Maven version at the time of writing. Installing and configuring Maven on a Windows 10 machine is straightforward. You can follow the steps described below to set up Maven 3.6.0 on a Windows 10 machine.

  1. Download the Maven binary zip archive from the official Maven downloads page.
  2. Uncompress the downloaded file. You can extract it to any location, but we recommend the Windows Program Files directory.
  3. The next step is to add the Maven bin directory to your PATH environment variable. If you uncompressed the Maven archive in Program Files, the Maven bin directory should be as given below.
                                                
    C:\Program Files\apache-maven-3.6.0\bin        
                                         

We have already covered the method to modify your PATH environment variable in the earlier section.

  4. Maven uses the JAVA_HOME environment variable, so ensure that it is properly set. We have already covered setting the JAVA_HOME environment variable in the earlier section. You can verify your JAVA_HOME setting using the following command at the Windows command prompt.
                                                    
    echo %JAVA_HOME%        
                                             

  5. After your PATH and JAVA_HOME environment variables are configured, you can verify your Maven installation using the following command.
                                                        
    mvn -version     
                                                 

The above command should print the Apache Maven version along with some other information, such as the Maven home and Java version. Once your Maven configuration is complete, you can move on to the next step and install IntelliJ IDEA.

Installing IntelliJ IDEA

IntelliJ IDEA is one of the most popular IDEs for Java and other JVM-based languages. It comes in two editions: Ultimate and Community. We will be downloading and installing the Community edition, as it is free, open source, and good enough for our purposes.
You can download the IntelliJ IDEA Community edition from the JetBrains website.
Installing IntelliJ IDEA is straightforward: start the installer and follow the on-screen instructions.
The installer should ask you to select an appropriate desktop shortcut. If you are running a 64-bit machine, you should choose the 64-bit launcher for the IntelliJ IDE. You should also choose to associate .java files with IntelliJ automatically. The installation takes less than five minutes to complete.
Start IntelliJ IDEA for the first time using the desktop shortcut. When you start it for the first time, the IDE will ask you for some default settings. Use the following points to help you select those defaults.

  1. The IDE should ask you to import settings from a previous installation. Select the “Do not import settings” radio button because you are installing the IDE for the first time.
  2. It should then ask you to select the UI theme. There are two options: the Darcula theme and the IntelliJ default theme. This book uses the IntelliJ theme.
  3. After selecting your theme, move on to the “Next: Default plugins.”
  4. The next section allows you to disable some of the default plugins. We recommend that you leave the defaults in this section and move on to the next part.
  5. Finally, the IntelliJ IDEA welcome screen prompts you to create a new project.

Creating First IntelliJ IDEA Project

The IntelliJ IDEA welcome screen allows you to create a new project. Creating your first Kafka project using IntelliJ IDEA is a little involved. Follow the steps below to create your first project.

  1. Click “Create New Project” and select Maven in the left side navigation panel.
  2. At the top of the window, you can select or browse to the appropriate SDK. We will be using the JDK, so navigate to your JAVA_HOME and select the JDK home directory. In a typical installation, your JDK should be located at the path below.
                                                        
    C:\Program Files\Java\jdk1.8.0_191     
                                                 

The final state of the dialog box should look like the figure below.

IntelliJ IDEA Maven Project
Fig A.1 - IntelliJ IDEA Maven Project
Press Next and provide the following information about your project.
Project Information
Fig A.2 - Project Information

The GroupID uniquely identifies your project across all projects. The GroupID name must follow Java’s package naming rules. You are free to use whatever GroupID you want. However, an excellent way to determine the granularity of the GroupID is to use the project structure.
The ArtifactID is the name of the JAR without a version number. You can choose whatever name you want with lowercase letters and no strange symbols.
You can choose whatever version you want. However, we are using the same version as the Kafka build that we used to test the examples. This will help us upgrade the code and make it available for other Kafka versions as new releases of Kafka come out.


  3. The next step is to select a project name and the project home directory. A good practice is to use the same name as the ArtifactID.
  4. Press Finish, and IntelliJ IDEA will ask you a question as shown below.
Enable Auto-Import for Maven Projects
Fig A.3 - Enable Auto-Import for Maven Projects

We recommend you enable the Auto-Import option. It allows IntelliJ IDEA to automatically download all the dependencies as you define them in your POM file.
If you have reached this far, you have successfully created the basic structure of your first Kafka project in IntelliJ IDEA. Now you are ready to do the following.

  1. Define your project dependencies.
  2. Create and execute a simple application.
  3. Integrate Kafka server tools in the IDE.

Let’s cover these one after the other.


Define Project Dependencies

A Maven 3 project in IntelliJ IDEA comes with a default pom.xml file, which already contains the GroupID, ArtifactID, and version information. The next most important element in the pom.xml file is the Maven compiler plugin. You can add it using the XML code below, right after the version element.

                                                        
    <build>
        <plugins>
            <!-- Maven Compiler Plugin-->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.8.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins> 
    </build>     
                                                 

The Maven compiler plugin is required to force JDK 1.8 as the default compiler for your project. Maven's default compiler level may not be Java 8, and hence we need the compiler plugin to pin the compiler to Java 8.
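As a side note, the compiler level can also be set through Maven properties instead of an explicit plugin configuration; the compiler plugin reads the `maven.compiler.source` and `maven.compiler.target` user properties. A minimal sketch of that alternative is shown below; it has the same effect on the compiler level as the plugin block above.

```xml
<!-- Alternative: set the compiler level via user properties.
     The maven-compiler-plugin picks these up automatically. -->
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
</properties>
```

Either approach works; the explicit plugin block used in this book also lets you pin the plugin version, which keeps builds reproducible across machines.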
The next essential element is the list of dependencies. You can include the XML code below right after the build element.


                                                        
    <dependencies> 
        <!-- Apache Kafka Clients--> 
        <dependency> 
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>2.0.0</version>
        </dependency>
        <!-- Apache Kafka Streams-->
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-streams</artifactId>
            <version>2.0.0</version>
        </dependency>
        <!-- Apache Log4J2 binding for SLF4J -->
        <dependency>    
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j-impl</artifactId>
            <version>2.11.0</version>
        </dependency>
        <!-- JUnit5 Jupiter -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-api</artifactId>
            <version>5.3.1</version>
            <scope>test</scope>
        </dependency>
        <!-- JUnit 5 Jupiter Engine -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-engine</artifactId>
            <version>5.3.1</version>
            <scope>test</scope>
        </dependency>
        <!-- JUnit 5 Jupiter Parameterized Testing -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter-params</artifactId>
            <version>5.3.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>    
                                                 

The first two dependencies are the Kafka clients and Kafka Streams libraries. These are the main dependencies for a Kafka Streams application.
The next dependency is the Log4J 2 binding for SLF4J. Kafka uses SLF4J to raise log events. However, we need an appropriate logging backend to bring the log events into our IDE and control the level of information shown to us. Log4J is one of the most popular options for that purpose, but Log4J 1.x has reached its end of life, and Log4J 2 is recommended instead. Hence, we include the Log4J 2 binding for SLF4J. This dependency also pulls in Log4J 2 itself, so we will be able to use the Log4J 2 logger directly in our application as well. I will demonstrate that in the example.
The remaining three dependencies are for JUnit 5. We are using JDK 1.8, and JUnit 5 is the standard for unit testing Java 8 applications.
That’s all. These dependencies are generic enough to take you a long way in your Kafka application development. The book specifies additional dependencies as they are needed for specific examples. However, the dependencies defined here are the most essential ones for a typical Kafka project.
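Putting the pieces together, the overall shape of the finished pom.xml is sketched below. The GroupID matches the package used in the book's examples; the ArtifactID and version shown here are illustrative placeholders, so use the coordinates you entered while creating the project.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Illustrative coordinates; use your own project values -->
    <groupId>guru.learningjournal.kafka.examples</groupId>
    <artifactId>hello-producer</artifactId>
    <version>2.0.0</version>

    <build>
        <plugins>
            <!-- maven-compiler-plugin block, as shown earlier -->
        </plugins>
    </build>

    <dependencies>
        <!-- Kafka clients, Kafka Streams, Log4J2 binding,
             and JUnit 5 dependencies, as shown earlier -->
    </dependencies>
</project>
```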


Configure Log4J2

Finally, we need to add a log4j2.xml file to the project resources. The log4j2.xml file is required because we are using Log4J 2.
Right-click the src/main/resources folder in the IntelliJ project explorer, select New, and then the File menu item. Name the file log4j2.xml and paste the content below into it.

                                                            
    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration status="ERROR">
        <Appenders>
            <Console name="stdout" target="SYSTEM_OUT">
                <PatternLayout pattern="[%d] (%c) - %p %m %n"/>
            </Console>
        </Appenders>
        <Loggers>
            <Root level="error">
                <AppenderRef ref="stdout"/>
            </Root>
            <Logger name="org.apache.kafka.clients" level="warn" additivity="false">
                <AppenderRef ref="stdout"/>
            </Logger>
            <Logger name="guru.learningjournal.kafka.examples" level="trace" additivity="false">
                <AppenderRef ref="stdout"/>
            </Logger>
        </Loggers>
    </Configuration>     
                                                     

The above code represents the most basic Log4J 2 configuration. It defines a console appender and formats the output using a simple pattern. The console appender writes log events to the console, and IntelliJ IDEA captures them and shows them back in the IDE.
Then we define three loggers.

  1. Root Logger
  2. Kafka Logger
  3. Application Logger

All three loggers use the console appender, so all of them will show log messages in the console.
We set the root logger level to error, which is standard practice. We don’t want to see log entries from everywhere, but we do want to see the error messages.
The next logger is the Kafka logger. I have restricted it to the Kafka clients package and set the level to warn. This configuration will show us all warning messages thrown from the Kafka clients package. This setting is appropriate for the example in this section; however, we will change the Kafka logger configuration depending on the examples.
Finally, the last logger is specific to our application. We want to see everything our application logs, and hence I set the level to trace.
That’s all. We are all set to create a simple Kafka application.


Create and Execute a Simple Application

Your project dependencies and log levels are set up. Now we want to create a simple Kafka application and execute it from the IDE. We will create a simple Kafka producer that sends ten messages to your local Kafka broker.

  1. Right-click the src/main/java node in your project explorer, select New, and then click the Java Class menu item.
  2. Give the fully qualified class name, including the package name, as shown below.
                                                                
    guru.learningjournal.kafka.examples.HelloProducer    
                                                         
  3. The IDE will automatically create a source file with a basic class template. You can replace the file content with the code below.

NOTE: The code listing is trimmed and reformatted for readability. It is not intended to be copied from the book. All the examples are available in the book’s GitHub repository.

                                                                
    package guru.learningjournal.kafka.examples;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.serialization.IntegerSerializer;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    import java.util.Properties;
    
    public class HelloProducer { 
        private static final Logger logger = LogManager.getLogger(HelloProducer.class);
        
        public static void main(String[] args) {
            String topicName;
            int numEvents;
            
            if (args.length != 2) {
                System.out.println("Please provide command line arguments: topicName numEvents");
                System.exit(-1);
            }
            topicName = args[0];
            numEvents = Integer.valueOf(args[1]);
            logger.info("Starting HelloProducer...");
            logger.debug("topicName=" + topicName + ", numEvents=" + numEvents);
            logger.trace("Creating Kafka Producer...");   
            Properties props = new Properties(); 
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "HelloProducer"); 
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); 
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName()); 
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 
            KafkaProducer<Integer, String> producer = new KafkaProducer<>(props); 
            logger.trace("Start sending messages..."); 
            try { 
                for (int i = 1; i <= numEvents; i++) { 
                    producer.send(new ProducerRecord<> (topicName, i, "Simple Message-" + i)); 
                } 
            } catch (KafkaException e) { 
                logger.error("Exception occurred – Check log for more details.\n" + e.getMessage()); 
                System.exit(-1); 
            } finally { 
                logger.info("Finished HelloProducer – Closing Kafka Producer."); 
                producer.close(); 
            } 
        } 
    }
        
                                                         

  4. The above code is a kind of “Hello World!” for a Kafka producer. The code is taken from the examples explained in one of the main chapters of the book, and the explanation for the code is covered in the respective chapter.
  5. This simple program takes a topic name (a String) and the number of events to send (an int) as command line arguments. So, the next step is to specify those command line arguments in your IDE.
  6. Go to the Run menu and select the Edit Configurations menu item. Choose Application in the templates. Type test 10 in the program arguments as shown below.
Fig A.4 - Command Line Arguments in IntelliJ IDEA
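As an aside, the ProducerConfig constants used in the listing are just symbolic names for plain string configuration keys. The stdlib-only sketch below (no Kafka dependency needed) shows the literal keys behind the four constants we set; the key strings follow Kafka's documented producer configuration names.

```java
import java.util.Properties;

public class ProducerConfigKeys {
    public static void main(String[] args) {
        // The same configuration as HelloProducer, written with the
        // literal string keys that the ProducerConfig constants stand for
        Properties props = new Properties();
        props.setProperty("client.id", "HelloProducer");          // ProducerConfig.CLIENT_ID_CONFIG
        props.setProperty("bootstrap.servers", "localhost:9092"); // ProducerConfig.BOOTSTRAP_SERVERS_CONFIG
        props.setProperty("key.serializer",                       // ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG
            "org.apache.kafka.common.serialization.IntegerSerializer");
        props.setProperty("value.serializer",                     // ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG
            "org.apache.kafka.common.serialization.StringSerializer");

        // Print the effective configuration
        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

Using the ProducerConfig constants, as the listing does, is preferable in real code because typos in the key names are caught at compile time.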

Once you reach this stage, you are ready to run your application. Start your Zookeeper and Kafka broker as explained in the earlier section. Create a Kafka topic named test, and then you should be able to execute your Kafka producer application. Follow the steps below to run the Kafka producer application from the IDE.
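If you have not created the test topic yet, you can do it from the Windows command prompt using the kafka-topics tool that ships with Kafka. In Kafka 2.0 this tool still connects to Zookeeper; the command below is a sketch that assumes the default local setup and that the Kafka bin\windows directory is on your PATH.

```shell
rem Create the "test" topic on the local single-node cluster
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
```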

  1. Double click src/main/java/HelloProducer.java file in the project explorer.
Fig A.5 - Executing Application in IntelliJ IDEA
  2. Click the green play button next to the line number in the code editor and select Run ‘HelloProducer.main()’ as shown in the above figure. You can also press CTRL+Shift+F10 while the HelloProducer class is selected in the project explorer.

The program shows all the Log4J 2 events in your IDE’s Run window, as shown below.

Fig A.6 - Log4J2 Output

The above method to execute your Kafka application is straightforward. However, it requires you to switch to the Windows command prompt to start your Zookeeper and Kafka servers. You can integrate that task into your IntelliJ IDE as explained in the next section.

Integrate Kafka localhost in the IDE

Starting Zookeeper, the Kafka broker, and the command line producer and consumer is a regular activity for a Kafka developer. However, it means going back and forth to the Windows command prompt and leaving a bunch of command windows open and running. Switching between the IDE and command windows is often annoying.
You can integrate scripts for all these tasks into your project and manage those activities from the IDE. The method is straightforward and makes your life easier.

  1. Right-click your project home in the project explorer window and select the New menu item. Then choose Directory from the child menu and create a folder named scripts.
  2. Right-click the scripts directory in your project navigation window and select New from the menu. Then select File from the child menu and create a file named start-zookeeper.cmd.
  3. Type the command to start the Zookeeper server into your start-zookeeper.cmd file. A sample command is given below.
                                                                    
    zookeeper-server-start.bat C:\Users\prashant\Downloads\kafka_2.12-2.0.0\config\zookeeper.properties    
                                                             

  4. The IDE may also ask you to install a plugin to support CMD files, as shown in the figure below. Choose to install the plugin. The IDE will install it, and you should be prompted to restart IntelliJ to activate the plugin.
IntelliJ CMD plugin
Fig A.7 - IntelliJ CMD plugin
  5. Follow the above steps to add another file named start-kafka-server.cmd and type the appropriate command to start the Kafka server. A sample command is given below.
                                                                        
    kafka-server-start.bat C:\Users\prashant\Downloads\kafka_2.12-2.0.0\config\server.properties    
                                                                 
  6. Similarly, add another file named start-console-consumer.cmd and type the appropriate command to start the console consumer.
                                                                            
    kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning 
                                                                     
  7. Now you are ready to start the Zookeeper server from the IDE. Right-click start-zookeeper.cmd and select Run ‘start-zookeeper’ from the menu, as shown in the figure below.
Start External Command from IntelliJ IDEA
Fig A.8 - Start External Command from IntelliJ IDEA
  8. The above command should start the Zookeeper server, and the logs should be redirected to the IDE, as shown in the figure below. You can stop Zookeeper using the red stop button in the IDE.
Stop External Command from IntelliJ IDEA
Fig A.9 - Stop External Command from IntelliJ IDEA
  9. Similarly, you can start the Kafka server from the IDE.
  10. Now you are ready to start your Kafka producer from the IDE. Select the src/main/java/HelloProducer class in the project explorer and press CTRL+Shift+F10.
  11. The HelloProducer application will start and send ten messages to Apache Kafka.
  12. Now you can start the console consumer from your IDE and check the output in the IntelliJ IDE itself.
  13. That’s all. Stop your consumer, then your Kafka server, and finally your Zookeeper server.

Summary

In this section, we learned how to install and configure Maven 3 on a Windows 10 machine. Then we installed IntelliJ IDEA and created a simple Kafka project. We created a generic POM file defining the most essential dependencies for a typical Kafka project. We concluded the section by integrating command files for the Kafka tools and learned to execute them from the IDE. This section enabled you to set up a development environment to develop, debug, and test your Kafka applications. In the next section, we will add unit testing code to the Kafka producer and learn how to build and deploy your project on a multi-node cluster.

Author: Prashant Pandey

