What are the basics of building microservices for our business?



Basics of APIs:  APIs paved the way for businesses to integrate with other systems. Businesses began building and leveraging APIs to let users share and embed content into streams of social media interactions. Many web and mobile apps that we use today are built on top of other APIs. A well-built API returns data correctly, quickly, securely, and reliably.

     An API is a set of programming instructions and standards for accessing web-based software applications or services. Developers use APIs to interact with services and connect data between different systems. An API describes the available functionality of a service, how it is used and accessed, and what formats to use for inputs and outputs. Businesses and applications are built on open APIs to pass data back and forth between systems. An API takes a set of instructions (a request) from an application, passes that request to the database, fetches the data or facilitates some action, and returns the response to the source. An API is usually a portion of a microservice.

     APIs can be used within an organization, or they may be public, consumer-facing APIs. Private APIs make businesses more agile, flexible, and powerful. Public APIs help a business offer integrations and build new partnerships. For example, The Weather Company collects weather data from millions of endpoints and sources around the globe. Your desktop always displays the current temperature and weather conditions based on your location: the device sends information about your location to an API, which returns the correct data, such as the temperature and forecast for that location. The Weather Company API responds to simple requests like these.

A microservice architecture is an approach to building an application that breaks its functionality into components. It is a set of clearly defined methods of communication between the various components, and it can also be described as a contract of actions you request from a particular service. These APIs are typically developed in a RESTful style, with a series of verbs corresponding to HTTP actions:

* POST (add an item to a collection of resources)

* GET (retrieve a single item or a collection of resources)

* PUT (edit an item that already exists in a collection of resources)

* DELETE (delete an item from a collection of resources)

These HTTP verbs correlate with the common CRUD capabilities that applications use today.
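The verb-to-CRUD mapping can be sketched with a toy in-memory collection. This is purely illustrative, with no web framework involved; the `post`/`get`/`put`/`delete` functions are hypothetical stand-ins for the route handlers an HTTP server would dispatch to:

```python
# Minimal sketch: the four HTTP verbs mapped to CRUD operations on a
# resource collection. The dict stands in for a database.

items = {}      # resource collection, keyed by item id
next_id = 1

def post(data):
    """POST: add a new item to the collection (Create)."""
    global next_id
    item_id = next_id
    items[item_id] = data
    next_id += 1
    return item_id

def get(item_id=None):
    """GET: fetch one item, or the whole collection (Read)."""
    return items if item_id is None else items[item_id]

def put(item_id, data):
    """PUT: replace an item that already exists (Update)."""
    items[item_id] = data

def delete(item_id):
    """DELETE: remove the item from the collection (Delete)."""
    del items[item_id]

order_id = post({"product": "coffee", "qty": 1})
put(order_id, {"product": "coffee", "qty": 2})
print(get(order_id))   # {'product': 'coffee', 'qty': 2}
delete(order_id)
```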

Functions of APIs: If you have an open API that you have made available to partners or developers, then you have a responsibility to keep that API available and working as expected.

Features to look for when implementing systems:

Availability: Is the API endpoint up, or is it returning an error?

Functionality: Is the API returning the correct data in the right format?

Speed: How quickly is the API returning responses?

Performance: Can we complete a transaction with data from this API? There are a number of key features you need in order to fully test API transactions.

Request Headers: Request headers can include rules and settings that define how an HTTP transaction should operate. There is a standard set of supported request header types with specific names and purposes. With an API check, we can set request headers on each request as part of a transaction. Consider a scenario where we need to POST username and password credentials to access some information. Once we have logged in at that endpoint, we need to store the session ID and set it in order to prepopulate other components specific to this session. These are the steps for creating such an API check:

1. Make a request to POST a username and password to a login endpoint

2. Extract the session ID from the response using a JSON path and save that ID as a variable to be reused in later steps

3. Make a POST request to a different endpoint with the session ID in the request headers

We can add more functional steps to this transaction and confirm that the session ID is set as expected. 
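The three-step session check can be sketched as follows. The endpoints are simulated as plain Python functions so the example is self-contained; in a real check the same steps would be HTTP requests (for example with the `requests` library), and the endpoint names, fields, and credentials here are all hypothetical:

```python
# Sketch of the login -> extract session ID -> reuse in headers flow.
import json

def login_endpoint(body):
    """Step 1 target: POST username/password, respond with a session ID."""
    creds = json.loads(body)
    if creds["username"] == "demo" and creds["password"] == "s3cret":
        return json.dumps({"sessionId": "abc123"})
    return json.dumps({"error": "unauthorized"})

def profile_endpoint(headers):
    """Step 3 target: requires the session ID in a request header."""
    if headers.get("X-Session-Id") == "abc123":
        return json.dumps({"profile": "ok"})
    return json.dumps({"error": "no session"})

# Step 1: POST credentials to the login endpoint.
response = login_endpoint(json.dumps({"username": "demo", "password": "s3cret"}))

# Step 2: extract the session ID from the JSON response and save it
# as a variable for later steps.
session_id = json.loads(response)["sessionId"]

# Step 3: call a different endpoint with the session ID in the headers.
result = profile_endpoint({"X-Session-Id": session_id})
print(result)   # {"profile": "ok"}
```

A real API check would add assertions after step 3 to confirm the session ID was accepted, which is what "confirm that the session ID is set as expected" amounts to.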

Handling Authentication: Authentication for an API defines who has permission to access secure data or endpoints. It matters for APIs that allow end users to make changes, and for companies that charge for access to data via an API. HTTP basic authentication is a standard part of HTTP, and it can be used for API endpoints or any HTTP URL: you simply send a username and password, encoded together in base64, as part of your request to the API. Other examples of direct authentication are API keys and tokens. API keys are just long strings of hexadecimal digits that can be sent instead of a username and password to authenticate access to an API endpoint.
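For illustration, here is how the two credential styles above are typically placed into request headers. The username, password, and key values are placeholders:

```python
# HTTP Basic auth base64-encodes "username:password" into an
# Authorization header; an API key or token is commonly sent as a
# Bearer token instead.
import base64

username, password = "alice", "s3cret"
token = base64.b64encode(f"{username}:{password}".encode()).decode()
basic_header = {"Authorization": f"Basic {token}"}

api_key = "9f8e7d6c5b4a"   # hypothetical key issued by the API provider
key_header = {"Authorization": f"Bearer {api_key}"}

print(basic_header)  # {'Authorization': 'Basic YWxpY2U6czNjcmV0'}
```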

Request/Response Style Applications (SOAP, REST, JSON): APIs are about sending a request and getting data back. For web-based APIs, the request and response happen over HTTP, and there are many ways to format them. Different API formats structure requests and responses differently: SOAP always uses the POST method with XML payloads, while REST usually uses GET and POST methods with JSON. Different formats may also pass credentials or authentication information in different ways, such as special HTTP headers, query string parameters, or cookies.
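As a rough sketch, here is the same "fetch order 42" request expressed in the two styles. The envelope, operation, and field names are illustrative, since a real SOAP service defines its own operations:

```python
# The same request in SOAP style (POST + XML envelope) and REST style
# (verb + URL, JSON response).
import json

# SOAP: always an HTTP POST whose body is an XML envelope.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrder><OrderId>42</OrderId></GetOrder>
  </soap:Body>
</soap:Envelope>"""

# REST: the verb and URL carry the intent; the response body is JSON.
rest_request = ("GET", "/orders/42")
rest_response = json.dumps({"orderId": 42, "status": "shipped"})
```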

Introduction to Microservices: Traditionally, applications had all the business logic in one piece of software, also known as a monolith. The modern solution is to break the monolith into microservices, each an independent service carved out of the bigger application. A microservices-based environment can be built with either an event-driven or a request-response approach, or a hybrid of the two. When you build microservices in the Apache Kafka™ ecosystem, there are two worlds:

      1. Service-Based Systems

      2. Stream Processing

Microservice architectures were designed to give business systems flexibility and adaptability by splitting their roles into lightweight, independently deployed services. In service-based systems, simple request-response interactions are pieced together. The following figure shows how microservices can split the roles within a system into discrete units. Each of the services, such as UI, Stock, and Payment, matches an underlying business function. The main benefit is that each service is deployed independently of the others.

Event-Driven Systems: Interaction happens through events, not direct calls to services. For example, in a payment service we listen for an event (Purchase Requested) and react to it (Payment Completed). The point is listening and reacting to events. Consider a stream processing system used to ingest data from thousands of mobile devices. Each device sends a JSON message denoting which applications on the phone are being opened, being closed, or crashing. The following picture shows a typical streaming application that ingests data from mobile devices into Kafka, processes it in a streaming layer, and then pushes the result to a serving layer where it can be queried.

      In an event-driven architecture, a service raises events. These events typically map to the real-world flow of the business. For example, a user making a purchase defines an event. This in turn triggers a series of downstream services (payment, fulfillment, etc.). These events flow through a broker (Apache Kafka), so the downstream services react to business events rather than being called directly by other services, as in the request-response approach. The first diagram shows the interaction built using a request-response model, and the second shows the event-driven approach.
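A minimal sketch of this decoupling, using an in-memory publish/subscribe stand-in for the broker. Topic, event, and service names are illustrative; in production the broker would be Apache Kafka:

```python
# The purchase service publishes a "Purchase Requested" event without
# calling anyone directly; downstream services each react to it.
from collections import defaultdict

subscribers = defaultdict(list)   # topic -> list of handler functions
log = []

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    """The broker: deliver the event to every subscriber of the topic."""
    for handler in subscribers[topic]:
        handler(event)

def payment_service(event):
    log.append(("payment", event["orderId"]))

def fulfillment_service(event):
    log.append(("fulfillment", event["orderId"]))

subscribe("purchase-requested", payment_service)
subscribe("purchase-requested", fulfillment_service)

# The upstream service only raises the event; it does not know (or care)
# which services react to it.
publish("purchase-requested", {"orderId": 42})
print(log)   # [('payment', 42), ('fulfillment', 42)]
```

Adding a new downstream service is just another `subscribe` call; the publisher never changes, which is the decoupling the broker provides.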


 Stream processing addresses the problem of continually reacting to and processing data as it flows through a business. Brokered technology changes the dynamics of the interaction by decoupling sender and receiver. Another option is to apply a hybrid of the two approaches.

Introduction to a Streaming Platform - Kafka: At the core sits a cluster of Kafka brokers. You can interact with the cluster through a wide range of client APIs using Python, REST, Scala, and more. There are two APIs for stream processing: Kafka Streams and KSQL. These APIs can be stateful, which means they can hold data tables much like a regular database. The third API is Connect, which has a whole ecosystem of connectors that interface with different types of databases or other endpoints, both to pull data from and push data to Kafka. A streaming platform brings these tools together with the purpose of turning data at rest into data that flows through an organization. The brokers have the ability to scale, store data, and run without interruption, connecting the applications and services across a department or organization.

    CQRS (Command and Query Responsibility Segregation) is an important pattern in microservices. It helps avoid complex queries and inefficient patterns by separating read and update operations for the database. The orders, shipments, customers, and payment events form canonical shared events in Kafka, which means they can be processed in an event-driven model. So this stream of events is a good place to start serving from.
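A minimal CQRS sketch, assuming a toy order domain: commands update the write model and emit events, and a projector consumes those events to maintain a separate read model shaped for one query. All names here are hypothetical, and the in-memory event list stands in for a Kafka topic:

```python
# Command side writes and emits events; query side projects events into
# a read model optimized for the "which orders are open?" query.
events = []                 # shared event log (a Kafka topic in practice)
orders_by_id = {}           # write model
open_orders_view = set()    # read model, shaped for one query

def handle_create_order(order_id):
    """Command side: update the write model, emit an event."""
    orders_by_id[order_id] = {"status": "open"}
    events.append(("OrderCreated", order_id))

def handle_ship_order(order_id):
    orders_by_id[order_id]["status"] = "shipped"
    events.append(("OrderShipped", order_id))

def project(event):
    """Query side: react to events to maintain the read model."""
    kind, order_id = event
    if kind == "OrderCreated":
        open_orders_view.add(order_id)
    elif kind == "OrderShipped":
        open_orders_view.discard(order_id)

handle_create_order(1)
handle_create_order(2)
handle_ship_order(1)
for e in events:
    project(e)
print(open_orders_view)   # {2}
```

The query never touches the write model, so reads stay simple even as the command side grows more complex.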

Fundamentals of Developing Client Applications in Confluent Cloud: Confluent Cloud is a managed service for Apache Kafka, a distributed streaming platform. It provides a single source of truth across event streams for mission-critical applications. The key benefits of Confluent Cloud include:

   * Developer acceleration in building event streaming applications

   * Liberation from operational burden

   * A bridge from on-premises to cloud with a hybrid Kafka service

Once you log into the Confluent Cloud UI with your email address and password, access the Confluent Cloud cluster and identify the broker's endpoint via the Confluent Cloud CLI command "ccloud kafka cluster describe". Then you can configure your client application to connect to the Confluent Cloud cluster using the following parameters:

    1. <BROKER ENDPOINT> : bootstrap URL for the cluster

    2. <API KEY>: API key for the user or service account

    3. <API SECRET>: API secret for the user or service account

You can define these parameters directly in the application code, or initialize a properties file and pass that file to your application. On the host with your client application, initialize a properties file with the configuration for your Confluent Cloud cluster, substituting <BROKER ENDPOINT>, <API KEY>, and <API SECRET> to match your Kafka cluster endpoint and account credentials. If you have a Java client, create a file called $HOME/.ccloud/client.java.config that looks like this:

```
bootstrap.servers=<BROKER ENDPOINT>
ssl.endpoint.identification.algorithm=https
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API KEY>" \
  password="<API SECRET>";
```
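If your client is written in Python rather than Java, one approach is to load the same style of properties file into a dict and pass it to your Kafka client's constructor. This sketch uses only the standard library; the file contents shown are illustrative:

```python
# Parse a Java-style .properties file (key=value lines, with "\" line
# continuations) into a Python dict of client configuration.
def load_properties(text):
    config = {}
    # Join "\"-continued lines before splitting into logical lines.
    logical_lines = text.replace("\\\n", " ").splitlines()
    for line in logical_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

sample = """bootstrap.servers=<BROKER ENDPOINT>
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
"""
conf = load_properties(sample)
print(conf["security.protocol"])   # SASL_SSL
```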

When deploying the application, it is possible to run on-premises and connect to Confluent Cloud services as long as there is network connectivity between the two. The best practice is to deploy the application in the same cloud provider region as your Confluent Cloud cluster.
