tag:blogger.com,1999:blog-29436309789435948382024-03-24T00:18:29.338-07:00Web Application Development | Creative Web Graphic Solutions We help businesses succeed online with our web applicationsBalamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.comBlogger112125tag:blogger.com,1999:blog-2943630978943594838.post-17044926836895403432023-12-30T05:40:00.000-08:002023-12-30T05:40:08.993-08:00Happy New Year'2024<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFvetP6nEHY3C78aGAfY6YJhjnotiJmqWoC_EoQa1aUZopD4j2AwTu87vbqj_UtQOs9T8RR3ge_uCMvESAOQkuiAwcXQE7y7ti1X7CcZBT8nrM8x2r-VjjuncCC2X-J2mUwMkFNSKcwaAOTL8bQFDrpEpVnMHFngwc3kLJ_lrA_KToaOyoQxlTJSM5_uc/s1200/newyear2024.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="800" data-original-width="1200" height="365" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFvetP6nEHY3C78aGAfY6YJhjnotiJmqWoC_EoQa1aUZopD4j2AwTu87vbqj_UtQOs9T8RR3ge_uCMvESAOQkuiAwcXQE7y7ti1X7CcZBT8nrM8x2r-VjjuncCC2X-J2mUwMkFNSKcwaAOTL8bQFDrpEpVnMHFngwc3kLJ_lrA_KToaOyoQxlTJSM5_uc/w647-h365/newyear2024.JPG" width="647" /></a></div><br /> <p></p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-1569307745311228142023-07-19T00:37:00.002-07:002023-07-19T00:40:59.160-07:00Sales and CRM Template using Coda AI<p> <iframe allowfullscreen="" class="BLOG_video_class" height="371" src="https://www.youtube.com/embed/CyyFvDMmDBo" width="601" youtube-src-id="CyyFvDMmDBo"></iframe></p><br /><p></p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-2004480697194585012022-10-14T01:54:00.004-07:002023-11-06T23:29:44.536-08:00How to deploy our cloud native applications on Kubernetes?<p><b><span style="color: #800180;"><br /></span></b></p><p><b></b></p><div class="separator" style="clear: both; text-align: center;"><b><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr6D73-4x1aFH7IoxMNA2pnIPoc1wDrchvW0JcgZ7GM7pxsc28qtfBR9Dt1C1zbnnljjdpKzkszbJJy7bNRajFmci8rz9h7bqgibJ-OS_tgWAtMx9WvO3lRdINF9qhf375xLa6qwUaNWgt1X_LRGMjfKj5-uHQb2gLXnLm1AQuYX3liEu5q_DwqESz/s739/k8.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="260" data-original-width="739" height="233" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr6D73-4x1aFH7IoxMNA2pnIPoc1wDrchvW0JcgZ7GM7pxsc28qtfBR9Dt1C1zbnnljjdpKzkszbJJy7bNRajFmci8rz9h7bqgibJ-OS_tgWAtMx9WvO3lRdINF9qhf375xLa6qwUaNWgt1X_LRGMjfKj5-uHQb2gLXnLm1AQuYX3liEu5q_DwqESz/w658-h233/k8.jpg" width="658" /></a></b></div><b><span><p style="color: #800180;"><b><span style="color: #800180;"><br /></span></b></p><span style="color: #ffa400;"> Introduction to Kubernetes:</span></span></b> Kubernetes was originally developed by Google and is now an open-source project managed by the Cloud Native Computing Foundation. It is compatible with a number of different container technologies, including Docker, rkt, and CRI-O. Kubernetes is a system for managing containerized applications across a cluster of servers. It allows running multiple copies of an application on different servers while keeping them in sync with each other. This ensures that our applications are always available and can handle traffic without any downtime. 
<p></p><p><span style="color: #ffa400;"><b><span>Basics of Kubernetes:</span></b> </span>Kubernetes is a powerful tool that can help you streamline your application development process. By using Kubernetes, you can easily manage your application's dependencies and configurations. It provides features such as load balancing, service discovery, and health checking. It also helps you automate the deployment and scaling of your applications.</p><p> A Kubernetes cluster is a group of machines running Kubernetes and its containers. Each cluster must have a master, and each cluster contains Kubernetes nodes. Nodes may be physical machines or VMs. Nodes run Pods, the Kubernetes objects that are created and managed. Each pod represents a single instance of an application and consists of one or more containers. Kubernetes starts, stops, and replicates all containers in a pod as a group. </p><p><span style="color: #ffa400;"><b>Installation:</b></span> In order to install Kubernetes, you will need the following:</p><p> - A Linux distribution such as Ubuntu 18.04</p><p>- 2 CPU cores</p><p>- 20GB of free disk space</p><p>- At least 4GB of RAM</p><p>To start the installation process, update your system's package list and install the dependencies required to add new repositories and keys. Next, add Google's apt repository to your system. This repository contains the packages required to install Kubernetes. Update your system's local package index and install the kubelet, kubeadm, and kubectl packages. These packages are used to configure and manage Kubernetes clusters. Finally, start the kubelet service and enable it to start automatically at boot time.</p><p><b><span style="color: #ffa400;">Kubernetes Architecture:</span></b> Kubernetes is a powerful system for managing containerized applications at scale. Its architecture consists of a master node that manages a group of worker nodes, each of which runs one or more containers. etcd is used to store Kubernetes configuration data in a distributed manner across all nodes in the cluster. Kubernetes groups the containers that make up an application into logical units for easy management and discovery. </p><p><span style="color: #ffa400;"><span>Kubernetes Master Node:</span> </span>The Kubernetes master node is the heart of the Kubernetes cluster. The master node is responsible for managing the cluster and maintaining its state. It provides a unified view of the cluster and exposes the API to end users and clients. The master node also schedules workloads on worker nodes and ensures that they are running as desired.</p><p><span style="color: #ffa400;"><span>Kubernetes Worker Nodes:</span> </span>Worker nodes are the machines in a Kubernetes cluster that actually run the containers. They are managed by the master node and perform the work assigned to them by it. Each worker node runs a kubelet process, which communicates with the master node and ensures that containers are running as desired. Worker nodes also run a proxy process that provides network connectivity to services running on the worker node.</p><p><span style="color: #ffa400;">ETCD Cluster:</span> etcd is a distributed key-value store that is used by Kubernetes to store its configuration data. The data stored in etcd is replicated across all nodes in the Kubernetes cluster so that if one node fails, the data is still available from another node. 
etcd is used by both master and worker nodes in the Kubernetes cluster.</p><p><b><span style="color: #ffa400;">Kubernetes Application Development:</span></b> The first thing you need to do is set up a Kubernetes cluster. You can create a cluster using AWS, Google Cloud, or Microsoft Azure. Once you have your cluster set up, you will need to install the kubectl command-line tool.</p><p> Once you have the cluster set up and kubectl installed, you need to create a configuration file (a kubeconfig) that will be used to connect kubectl to your Kubernetes cluster. Finally, connect kubectl to your cluster by running the following command:</p><p> $kubectl --kubeconfig=<path_to_config_file> get nodes</p><p>This will list all the nodes in your cluster. Once you have verified that kubectl is correctly configured and able to communicate with your cluster, you are ready to start working with your containerized applications.</p><p>Now you are ready to start deploying your applications to Kubernetes. The first thing you need to do is create a deployment, which is a set of identical pods that provides your application with high availability. To create the deployment, you need to create a deployment resource file. It contains information about your deployment, such as the name of the deployment and the number of replicas you want to run.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdIB9gIFzWXmY9SuDrur0N_5H7ccTZeGmlYP-aInpAXz8tBE7b5xKl3hPxfWlYSxcNNhzR2lzxkECDdaPt_DUYS6XXLa2TRMMRNsDvxqjj52RgpzBTda8pIvaSOHvQuhYlbi536VsFvxaMWAOxDqXWUHeu8SJS78fKSxWPzEsks-oxTgXd7ZDVbQRl/s596/kubernetes.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="384" data-original-width="596" height="359" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdIB9gIFzWXmY9SuDrur0N_5H7ccTZeGmlYP-aInpAXz8tBE7b5xKl3hPxfWlYSxcNNhzR2lzxkECDdaPt_DUYS6XXLa2TRMMRNsDvxqjj52RgpzBTda8pIvaSOHvQuhYlbi536VsFvxaMWAOxDqXWUHeu8SJS78fKSxWPzEsks-oxTgXd7ZDVbQRl/w608-h359/kubernetes.png" width="608" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p><b><span style="color: #ffa400;"> Stateful Web Application:</span></b> A stateful web application maintains the state of a user's activity within the application; a stateless web application does not. The main benefits of stateful applications are:</p><p> - Improved performance for users by reducing the need to re-authenticate or re-enter data</p><p> - The ability to scale horizontally without losing data</p><p> - Greater security by encrypting data and storing it in a central location </p><p> - Easy management with well-defined APIs.</p><p><b><span style="color: #ffa400;">Kubernetes Application Deployment:</span></b> Kubernetes is a powerful tool that can help you manage your web application and ensure its availability. By using Kubernetes, you can abstract away the underlying infrastructure and focus on your application code. It can also automate many of the tasks that are required to maintain a stateful web application, such as storage provisioning, scaling up or down based on load, rolling out new versions of the app, and self-healing in the event of failures.</p>
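<p>To make the deployment resource described above concrete, here is a minimal, hedged sketch using the official Kubernetes Python client; the application name, image, and replica count are hypothetical and not from the original post:</p>
<pre>
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # reads the kubeconfig file described above

# A minimal deployment resource: a name, a label selector, and three replicas.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="demo-app", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
</pre>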
<p>It enables you to run your applications in a cloud-native environment with all the benefits that come with it.</p><p> To use Kubernetes, you need to containerize your application using Docker or another containerization tool. This means packaging your application code and dependencies into a single unit that can run isolated from other applications. If you are not familiar with containers, think of them as self-contained units that include everything your application needs to run, such as code, libraries, dependencies, and configuration files. Once your application is containerized, you can deploy it to a Kubernetes cluster. A cluster consists of a set of nodes that are used to run your applications. These nodes are managed by the control plane, which is responsible for maintaining the desired state of the cluster. You can interact with the control plane using the kubectl command-line tool. </p><p> Kubernetes also provides a rich set of APIs that can be used to automate tasks such as deployments, scaling, and operations. These APIs are exposed via the kube-apiserver and can be accessed using any programming language.</p><p> A cloud-native app is a modern application built using microservices, which are small, independent services that can be deployed and scaled quickly and easily. Cloud-native apps are also designed to be highly resilient, so that if one service fails, the others can continue to run. To run a cloud-native app in Kubernetes, you will need to create a deployment. A deployment is a group of identical pods that are all running the same application. Pods are the basic units of deployment in Kubernetes. </p><p> To create a deployment, you will need to specify the number of replicas you want, as well as the image for your application. Once you have created your deployment, you are ready to <a href="https://docs.docker.com/desktop/kubernetes/" rel="nofollow" target="_blank">deploy your applications on Kubernetes</a>. To do this, you will use the kubectl apply command. This command will take your deployment resource file and create the resources specified in it. Once your application is deployed, Kubernetes will manage it for you. This includes tasks such as rolling out new versions of your application and scaling your application based on demand.</p><p><b><span style="color: #ffa400;">How It Works:</span></b> Kubernetes works by using a control plane to manage a group of nodes or servers. The control plane consists of a number of controllers that work together to ensure that the desired state of the system is always met. For example, if you want to deploy a new application, the control plane will ensure that the necessary containers are created on the nodes and that they are correctly configured. </p><p> If one of the nodes in the cluster goes down, the control plane takes care of rescheduling the affected containers onto other nodes in the cluster so that the application can continue to function correctly. This ensures that your applications are always available, even in the event of hardware failure. Kubernetes also uses a number of services to provide additional functionality such as storage, networking, and monitoring. These services can be provided either by Kubernetes itself or by external providers. </p><p><span style="color: #ffa400;"> Benefits of Integrating Kubernetes:</span> Kubernetes automates container-management tasks and includes built-in commands for deploying applications. It is a powerful system for managing containerized applications at scale.</p>
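<p>For instance, one of those common tasks, scaling a deployment up or down, is a single call in the same Python client; this is only a sketch, and the deployment name and namespace are again hypothetical:</p>
<pre>
# Scale the deployment created in the earlier sketch from 3 to 5 replicas.
from kubernetes import client, config

config.load_kube_config()
client.AppsV1Api().patch_namespaced_deployment_scale(
    name="demo-app",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
</pre>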
<p>It provides high availability and fault tolerance so that you don't have to worry about designing your own solutions for these problems. It also makes it easy to automate many common tasks such as deployment and scaling so that you can focus on developing your applications rather than managing them. There are many benefits to integrating Kubernetes into your workflow:</p><p><span style="color: #ffa400;"> - Increased Efficiency:</span> By automating the deployment and management of containerized applications, Kubernetes can help to optimize your use of resources and improve efficiency.</p><p><span style="color: #ffa400;"> - Reduced Costs:</span> By using Kubernetes to manage your containerized applications, you can reduce your infrastructure costs by making better use of your existing resources.</p><p><span style="color: #ffa400;"> - Improved Scalability:</span> Kubernetes makes it easy to scale your containerized applications up or down as needed, so you can always meet the demands of your users.</p><p><span style="color: #ffa400;"><span> - Increased Reliability:</span> </span>With its built-in features for fault tolerance and self-healing, Kubernetes can help you ensure that your applications are always available when you need them.</p><p> </p><p><br /></p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-90851872247465180282022-09-19T19:11:00.005-07:002022-09-19T19:26:29.584-07:00Interview with Best Startup India <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjg6sKem9RkOHjpKHQoZmPFqEiR2bPn3AcOXYdKCqtmcoPni4I2LdsumXaGACEfVEhhflTy5cs8QnFQl-o8jeHMPSKTcmZM7gbh9aOoDZ5fKkHkwB1Cu9g5owvBcQkLnd52YoNy6dO4FHF1fBTUhtyKgq1gzq8NzjUX-9APKkbszMvWDRRG5B9b_0dU/s1208/659037C6-06FB-42C8-8189-2779D1DED6AB.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="578" data-original-width="1208" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjg6sKem9RkOHjpKHQoZmPFqEiR2bPn3AcOXYdKCqtmcoPni4I2LdsumXaGACEfVEhhflTy5cs8QnFQl-o8jeHMPSKTcmZM7gbh9aOoDZ5fKkHkwB1Cu9g5owvBcQkLnd52YoNy6dO4FHF1fBTUhtyKgq1gzq8NzjUX-9APKkbszMvWDRRG5B9b_0dU/w600-h256/659037C6-06FB-42C8-8189-2779D1DED6AB.jpeg" width="600" /></a></div><div></div><div><br /></div><div><br /></div><div> <a href="https://beststartup.in/balamurugan-d-after-my-graduation-mca-in-1998-i-started-my-career-as-a-software-engineer-in-bangalore/">Interview with Beststartup in India</a></div>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-34218095642385448462022-08-22T05:58:00.015-07:002022-09-06T18:59:55.684-07:00What are the basics of building the microservices for our business?<p><span style="color: #ffa400;"><b><br /></b></span></p><p><span style="color: #ffa400;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="color: #ffa400;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibHOPYDGGll26QSXYtFYISQj51MLYYKkchiKjwXApZ2XZkPRONiypDAn0XwLJNHn2mv07pBN552uqA_mTpZuCN2cinNEHeWYSVi8NlDP2ETAsqkqV5xZjY7GN9XhYDgiALnbQ-7Of4NsPDkUD-UuLIgvZopiN-bLGbSchAf6pzhzE-OxUqNw3Pvazg/s704/microservice2.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="584" data-original-width="704" height="419" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEibHOPYDGGll26QSXYtFYISQj51MLYYKkchiKjwXApZ2XZkPRONiypDAn0XwLJNHn2mv07pBN552uqA_mTpZuCN2cinNEHeWYSVi8NlDP2ETAsqkqV5xZjY7GN9XhYDgiALnbQ-7Of4NsPDkUD-UuLIgvZopiN-bLGbSchAf6pzhzE-OxUqNw3Pvazg/w612-h419/microservice2.jpg" width="612" /></a></span></div><br /><p></p><p><span style="color: #ffa400;"><b>Basics of APIs:</b></span> APIs proved the way for businesses to integrate with other systems. The businesses began building and leveraging APIs to allow the users to share and embed content into streams of social media interactions. Many Web and mobile apps that we use today are built on top of other APIs. APIs will perform and return data correctly, quickly, securely, and reliably. </p><p> API is a set of programming instructions and standards for accessing web-based software applications or services. The developers use APIs to get instructions to interact with services and connect data between different systems. It describes the available functionality of a service, how it will be used and accessed, and what format to access for inputs and outputs. The businesses and applications are built on open APIs to pass data back and forth between systems. The APIs take a set of instructions ( a request) from an application, that request to the database, fetches the data, or facilitates some actions, and returns the response to the source. APIs is usually a portion of microservice.</p><p> APIs can be used within an internal organization or they may be public consumer-facing APIs. Private APIs make businesses more agile, flexible, and powerful. Public APIs help the business to connect to offer integrations and build new partnerships. For ex, the weather company collects weather data from millions of endpoints and sources around the globe. Our desktop always displays the current temperature and weather conditions based on your location. The technology predicts weather patterns, the device sends information about your location to an API and returns the correct data such as the temperature and forecast based on your location. The weather company API responds to simple requests. </p><p>Microservice is an approach to building an application that breaks the functionality into components. It is set of clearly defined methods of communication between various components. It is also be described as contract of actions you request for a particular service. APIs are developed using a RESTful style. APIs have series of verbs associating with the HTTP actions like,</p><p>* POST ( add item to the collection / resources )</p><p>* GET (get a single item or collection / resources)</p><p>* PUT (edit an item that already exists in a collection / resources )</p><p>* DELETE (delete the item in the collection / resources)</p><p>These HTTP verbs correlate with the common CRUD capabilities that the applications use today. 
<p><b><span style="color: #ffa400;">Functions of APIs:</span></b> If you have an open API that you have made available to partners or developers, then you have a responsibility to ensure that the API is available and working as expected.</p><p>Features to look for when monitoring your APIs: </p><p><span style="color: #ffa400;">Availability:</span> Is the API endpoint up, or is it returning an error?</p><p><span style="color: #ffa400;">Functionality:</span> Is the API returning the correct data in the right format?</p><p><span style="color: #ffa400;">Speed:</span> How quickly is the API returning responses?</p><p><span style="color: #ffa400;">Performance:</span> Can we complete the transaction with data from this API? There are a number of key features that you need in order to fully test API transactions.</p><p><b><span style="color: #ffa400;">Request Headers:</span></b> Request headers can include rules and settings that define how an HTTP transaction should operate. There is a standard set of supported request header types that have specific names and purposes. With an API check, we can set request headers with each request as part of a transaction. Consider a scenario where we need to POST username and password credentials to access some information. Then, once we have logged in at that endpoint, we need to store and set the session ID in order to prepopulate other components specific to this session. These are the steps to follow for the creation of such an API check:</p><p>1. Make a request to POST a username and password to an endpoint to log in</p><p>2. Extract the session ID from the response using a JSON path and save that ID as a variable that will be reused in future steps</p><p>3. Make a request to POST to a different endpoint with the session ID in the request headers</p><p>We can add more functional steps to this transaction and confirm that the session ID is set as expected. </p><p><b><span style="color: #ffa400;">Handling Authentication:</span></b> Authentication for an API defines who has permission to access secure data or endpoints. This matters for APIs that allow end users to make changes, or for companies that charge some cost for accessing data via the API. HTTP basic authentication is a standard part of HTTP, and it can be used for API endpoints or any HTTP URL. You can simply send a username and password - encoded together in base64 - as part of your request to the API. Another example of direct authentication is API keys or tokens. API keys are just long strings of hexadecimal digits that can be sent instead of a username and password to authenticate access to an API endpoint.</p><p><b><span style="color: #ffa400;">Request/Response Style Application (SOAP, REST, JSON):</span></b> APIs are about sending a request and getting data back. Web-based APIs operate over HTTP through requests and responses. There are many ways to format these requests and responses, and different API formats use different ways of structuring them. SOAP always uses the POST method with XML, while REST usually uses the GET and POST methods with JSON.</p>
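<p>A hedged sketch of the two authentication styles just described, plus the session-ID flow from the API check above, using Python's requests library; all endpoint URLs, header names, and response fields are illustrative assumptions:</p>
<pre>
import requests

# HTTP basic authentication: requests base64-encodes the credentials for us.
r1 = requests.get("https://api.example.com/secure", auth=("user", "secret"))

# API-key authentication: the header name varies by provider; X-API-Key is illustrative.
r2 = requests.get("https://api.example.com/secure", headers={"X-API-Key": "0123abcd"})

# Session-style transaction: log in, extract the session ID, reuse it in later steps.
login = requests.post("https://api.example.com/login",
                      json={"username": "user", "password": "secret"})
session_id = login.json()["session_id"]  # assumed response field
r3 = requests.post("https://api.example.com/data",
                   headers={"Session-Id": session_id})
</pre>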
<p>Different formats may pass credentials or authentication information in different ways, such as special HTTP headers, query string parameters, or cookies.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUqSbbzhvkSlpRUInljeY8uOhbcH9_zYRMl70039EUISqxD3nPyEaqGVUCtUkxCbNtojOWiACwRlbT-CTiKf_j7Cm4bPFI9nvCIB-CzKLNJgt305rhN9TDn0n_F2t0GPAaInFkGEOK9Rb571_wCnf-UroHB8QDlD8iaBZYgXoDfZM87f8FdZVdcyfZ/s582/plutora.jpg" style="clear: right; display: inline; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img border="0" data-original-height="323" data-original-width="582" height="174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUqSbbzhvkSlpRUInljeY8uOhbcH9_zYRMl70039EUISqxD3nPyEaqGVUCtUkxCbNtojOWiACwRlbT-CTiKf_j7Cm4bPFI9nvCIB-CzKLNJgt305rhN9TDn0n_F2t0GPAaInFkGEOK9Rb571_wCnf-UroHB8QDlD8iaBZYgXoDfZM87f8FdZVdcyfZ/w320-h174/plutora.jpg" width="320" /></a><b><span style="color: #ffa400;">Introduction to Microservices:</span></b> Traditionally, applications have had all their business logic in one piece of software - also known as a monolith. The modern solution is to break the monolith into microservices. A microservice is an independent service split out from a bigger application. A microservices-based environment can be built with either an event-driven or a request-response approach, or a hybrid of the two. When you build microservices in the Apache Kafka™ ecosystem, there are two worlds:</p><p></p><p> 1. Service-Based Systems</p><p> 2. Stream Processing</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSFVipC5gaDnckmM3ZzqORa4ZPNPkwmJrgsXh-iWsB5xCH-F48UhwTdwCyS_JvnWR5LGKwhPrWIouVWBwsJwyb7xW69RJM0C78twq-zAsER6zYWFFtbhgdJBUAIFUJ1jk9j4qO34t2GNjNnS47OHTwykzYDmzYehX53_mDCbAGNsfqE4_8-iGklnu7/s1325/996D21F7-0EA7-4D75-BCDA-EA339AC02084.jpeg" style="clear: right; display: inline; float: right; margin-bottom: 1em; margin-left: 1em; text-align: center;"><img border="0" data-original-height="819" data-original-width="1325" height="198" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSFVipC5gaDnckmM3ZzqORa4ZPNPkwmJrgsXh-iWsB5xCH-F48UhwTdwCyS_JvnWR5LGKwhPrWIouVWBwsJwyb7xW69RJM0C78twq-zAsER6zYWFFtbhgdJBUAIFUJ1jk9j4qO34t2GNjNnS47OHTwykzYDmzYehX53_mDCbAGNsfqE4_8-iGklnu7/s320/996D21F7-0EA7-4D75-BCDA-EA339AC02084.jpeg" width="320" /></a>Microservice architectures were designed to provide flexibility and adaptability to business systems by splitting the roles into lightweight, independently deployed services. Simple request-response interactions are pieced together to form service-based systems. The following figure shows how microservices can split the roles within a system into discrete units. Each of the services, like UI, Stock, and Payment, matches an underlying business function. The main benefit is that each service is deployed independently of the others.</p><p></p><p><b><span style="color: #ffa400;">Event Driven System:</span></b> Here the interaction is through events, not calls to services. For example, in a payment service, we listen for events (Purchase Requested) and react to them (Payment Completed). The point is listening and reacting to events. For example, a stream processing system may be used to ingest data from thousands of mobile devices, where each device sends JSON messages to denote applications on each mobile phone that are being opened, being closed, or crashing.</p>
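<p>A minimal, hedged sketch of that listen-and-react pattern with the confluent-kafka Python client; the topic names, broker address, and message fields are invented for illustration:</p>
<pre>
# pip install confluent-kafka
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({"bootstrap.servers": "localhost:9092",
                     "group.id": "payment-service",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["purchase-requested"])   # listen for Purchase Requested events
producer = Producer({"bootstrap.servers": "localhost:9092"})

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    order = json.loads(msg.value())
    # ... charge the customer here ...
    producer.produce("payment-completed",    # react with a Payment Completed event
                     json.dumps({"order_id": order["order_id"], "status": "paid"}))
    producer.flush()
</pre>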
<p>The following picture shows a typical streaming application that ingests data from mobile devices into Kafka, processes it in a streaming layer, and then pushes the result to a serving layer where it can be queried.</p><p><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjihPlglXZUvppWUowlsaFzt332UYE-2zUYW_-6QLjHSR-VB82ghortEwYyNdCwRX-1f_joMeherb9Sj2Xc3bsC3nSapyi3NJ7jD7jzPei8W2rCG22x9N8YintBYks5yUDND3t7bAaKIi_rrPYTCKcJoqO4bGbOiQEZdY1lOOuFqSPQCtj5y83-eswP/s933/kafka.jpg" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="467" data-original-width="933" height="291" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjihPlglXZUvppWUowlsaFzt332UYE-2zUYW_-6QLjHSR-VB82ghortEwYyNdCwRX-1f_joMeherb9Sj2Xc3bsC3nSapyi3NJ7jD7jzPei8W2rCG22x9N8YintBYks5yUDND3t7bAaKIi_rrPYTCKcJoqO4bGbOiQEZdY1lOOuFqSPQCtj5y83-eswP/w653-h291/kafka.jpg" width="653" /></a></p><p> In an event-driven architecture, services raise events. These events typically map to the real-world flow of the business. For example, a user making a purchase defines an event. This in turn triggers a series of downstream services (payment, fulfillment, etc.). These events flow through a broker (Apache Kafka), so the downstream services react to business events rather than being called directly by other services (the request-response approach). The first of the following diagrams shows the interaction built using a request-response model, and the second shows the event-driven approach.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEintrDbayPOlR-9fFKqO0Wpfb2SkBrmEz4BgJiWisCYxcxSG7bqf9ZBINU-iw7qVrKCLg77i1sxSNedJGTzjueqTH9KThkvCPnmQXzSf1dvUj7z06uk6opQgkaRINezhltzjMCpLXfichrcM35u9YrpI7LXriojctT1G73HrocfRQBsLBy_YxRocCph/s680/kafka.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="211" data-original-width="680" height="221" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEintrDbayPOlR-9fFKqO0Wpfb2SkBrmEz4BgJiWisCYxcxSG7bqf9ZBINU-iw7qVrKCLg77i1sxSNedJGTzjueqTH9KThkvCPnmQXzSf1dvUj7z06uk6opQgkaRINezhltzjMCpLXfichrcM35u9YrpI7LXriojctT1G73HrocfRQBsLBy_YxRocCph/w641-h221/kafka.jpg" width="641" /></a></div><br /><p> Stream processing addresses the problem of continually reacting to and processing data as it flows through a business. The brokered technology changes the dynamic of interaction by decoupling sender and receiver. Another option is to apply a hybrid of the two approaches.</p><div class="separator" style="clear: both; text-align: center;"><b style="text-align: left;"><span style="color: #ffa400;">Introduction to the Streaming Platform - Kafka:</span></b><span style="text-align: left;"> At the core sits a cluster of Kafka brokers. 
You can </span></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUlD_a7nAk7WKblvPNbTBNVh8WcNEqhfX2ZI5VGXZWVRU7wGWJaYtKhZrG8gwRuUlduz243jENZ16dbHRuKWcCESMlEtzIXtSs-ei1__B10kn_LitmkEMyssV2vueZBZ1BZaer2HT62wKzj-FERHHzCky3_53z5Hn3T3o_YhRRJ5YAyVisJkzeGpQ3/s923/kstream.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="569" data-original-width="923" height="197" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUlD_a7nAk7WKblvPNbTBNVh8WcNEqhfX2ZI5VGXZWVRU7wGWJaYtKhZrG8gwRuUlduz243jENZ16dbHRuKWcCESMlEtzIXtSs-ei1__B10kn_LitmkEMyssV2vueZBZ1BZaer2HT62wKzj-FERHHzCky3_53z5Hn3T3o_YhRRJ5YAyVisJkzeGpQ3/w299-h197/kstream.jpg" width="299" /></a></div>interact with the cluster through a wide range of client APIs using Python, REST, Scala, and more. There are two APIs for stream processing: Kafka Streams and KSQL. These APIs can be stateful, which means they can hold data tables much like a regular database. The third API is Connect. This has a whole ecosystem of connectors that interface with different types of databases or other endpoints, both to pull data from and push data to Kafka. A streaming platform brings these tools together with the purpose of turning data at rest into data that flows through an organization. The brokers have the ability to scale, store data, and run without interruption, connecting the applications and services across a department or organization.<div class="separator" style="clear: both; text-align: center;"><span style="text-align: left;"><br /></span></div><div class="separator" style="clear: both; text-align: justify;"><span style="text-align: left;"> <a href="https://dzone.com/articles/microservices-with-cqrs-and-event-sourcing#:~:text=CQRS%20is%20another%20design%20pattern,acts%20as%20a%20query%20layer." target="_blank">CQRS (Command and Query Responsibility Segregation)</a> is an important pattern in microservices. It helps to avoid complex queries and inefficient patterns. This pattern separates read and update operations for the database. The orders, shipments, customers, and payment events form canonical shared events in Kafka, which means they can be processed in an event-driven model. So, this is a good place to start serving the stream of events.</span></div><p style="text-align: justify;"><b style="text-align: left;"><span style="color: #ffa400;">Fundamentals of developing Client Applications in Confluent Cloud:</span></b><span style="text-align: left;"> </span><a href="https://www.confluent.io/" target="_blank">Confluent Cloud </a>is a managed service for Apache Kafka, a distributed streaming platform. It provides a single source of truth across event streams for mission-critical applications. The key benefits of Confluent Cloud include:</p><p> * Developer acceleration in building event streaming applications</p><p> * Liberation from operational burden</p><p> * A bridge from on-premises to cloud with a hybrid Kafka service</p><p>Once you log into the Confluent Cloud UI with your email address and password, access the Confluent Cloud cluster and identify the broker's endpoint via the Confluent Cloud CLI command "ccloud kafka cluster describe". Then, you can configure your client application to connect to the Confluent Cloud cluster using the following parameters:</p><p> 1. <BROKER ENDPOINT> : bootstrap URL for the cluster</p><p> 2. <API KEY>: API key for the user or service account</p><p> 3. <API SECRET>: API secret for the user or service account</p>
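<p>For instance, here is a hedged sketch of wiring those three parameters directly into a Python producer with the confluent-kafka client; the endpoint, credentials, and topic name below are placeholders, not real values:</p>
<pre>
# pip install confluent-kafka
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "pkc-12345.us-east-1.aws.confluent.cloud:9092",  # broker endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "MY_API_KEY",      # API key
    "sasl.password": "MY_API_SECRET",   # API secret
})
producer.produce("test-topic", b"hello from Confluent Cloud")
producer.flush()
</pre>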
<p>You can define these parameters directly in the application code or initialize a properties file and pass that file to your application. On the host with your client application, initialize a properties file with the configuration for your Confluent Cloud cluster. Substitute <BROKER ENDPOINT>, <API KEY>, and <API SECRET> to match your Kafka cluster endpoint and account credentials. If you have a Java client, create a file called $HOME/.ccloud/client.java.config that looks like: </p><p>bootstrap.servers = <BROKER ENDPOINT></p><p>sasl.jaas.config = org.apache.kafka.common.security.plain.PlainLoginModule required \</p><p>username\ = "<API KEY>" password\="<API SECRET>";</p><p>ssl.endpoint.identification.algorithm=https</p><p>security.protocol=SASL_SSL</p><p>sasl.mechanism=PLAIN</p><p>When deploying the application, it is possible to run on-prem and connect to Confluent Cloud services as long as there is network connectivity between the two. The best practice is to deploy the application in the same cloud provider region as your Confluent Cloud cluster.</p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-43172912569753105302022-07-27T01:03:00.001-07:002022-07-27T02:31:44.858-07:00AI Powered Customer Persona Creation & Service<br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="406" src="https://www.youtube.com/embed/--EFRpOSy_8" width="632" youtube-src-id="--EFRpOSy_8"></iframe></div><br />Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-25189224165229347512022-06-11T08:16:00.009-07:002022-06-23T01:25:43.995-07:00How to improve organization Resilience and Disaster Recovery with Azure?<p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJ1slmnwmD5AneCC8f7WIBDc2fOWY2S1hzYnSG-93VePNQ915FiF4L6Lp7mdVYdV0A0mecdPXmo7ammITHRH8P-G9vEWiMeAogE5NKlESa0T7Lx--n4BmGdAOj1SxItJ65KgzypcZcYhBe96b97pC2BlFZPbuofm-6cU2qf8j4TwugIneDJmKCXZuk/s795/DR-Plan.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="795" data-original-width="774" height="386" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJ1slmnwmD5AneCC8f7WIBDc2fOWY2S1hzYnSG-93VePNQ915FiF4L6Lp7mdVYdV0A0mecdPXmo7ammITHRH8P-G9vEWiMeAogE5NKlESa0T7Lx--n4BmGdAOj1SxItJ65KgzypcZcYhBe96b97pC2BlFZPbuofm-6cU2qf8j4TwugIneDJmKCXZuk/w632-h386/DR-Plan.jpg" width="632" /></a></div><br /> <p></p><p> High availability of infrastructure design between on-premises and cloud can keep IT operations up and running. But people may have been displaced from their usual workplace or lost access to their usual devices or systems. Disrupted partner relationships and supply chains can delay time to market and weaken the competitive advantage. An inadequate response can harm the company's image and the confidence of customers and investors. If people can't do their jobs, the business cannot function. A successful business continuity program requires executives to play an active role in developing the plan and to ensure buy-in from company leadership. The top consideration for a business continuity plan is the development of a clear decision-making hierarchy. 
The key members of the business continuity team are involved in planning and testing throughout the year to ensure the plan is effective and up to date under the pressure of an actual emergency. Done poorly, data protection costs you money instead of saving it:</p><p><span style="color: #ffa400;"> 1. More Time Means More Expense:</span> The complexity of containerized environments makes manipulating raw data a time-consuming task, and that costs you dearly. Extensive recovery time results in a loss of revenue.</p><p><span style="color: #ffa400;"> 2. Manual Processes Take More Resources: </span>The more manual tasks developers or staff take on, the more difficult protecting the entire environment and its applications becomes.</p><p><span style="color: #ffa400;"> 3. Customer Loss: </span>During outages, you may lose the opportunity to bring in a new customer or risk losing current customers.</p><p><span style="color: #ffa400;"> 4. Ransomware Protection: </span>A separate solution for ransomware protection leaves you paying on top of the one you already have.</p><p> SaaS-based disaster recovery can automate replication, asynchronously replicating workloads in the on-premises environment through one-click orchestrated recovery solutions and seamless integration with Azure services. Disaster recovery is a set of procedures that allow a company to recover IT infrastructure, corporate resources, and employee devices in the event of an unplanned disruption. Disasters can be natural events or man-made accidents. As a component of business continuity, DR ensures that critical technology remains available or is restored quickly. The strategy focuses on the restoration of hardware, applications, and data to minimize the impact of a negative event. The solutions are used to bring important systems back online, replicate critical data, and replace lost or inaccessible devices. Disaster recovery as a service (DRaaS) solutions ensure the company can continue to operate in the event of an emergency or failure. Disaster recovery plans include several essential elements:</p><p><span style="color: #ffa400;"> * Inventory of Assets: </span>This is a prioritized list of the company equipment and services used in day-to-day operations, physical hardware, and digital assets, so that the important services and systems can be recovered fast. The recovery time objective (RTO) is the maximum amount of downtime a business can endure before recovery needs to take place. </p><p><span style="color: #ffa400;">* Roles: </span>By assigning roles, companies can prevent confusion about who carries out the various portions of the plan in the event of an emergency. This means developing a clear decision-making hierarchy with a set of responsibilities to carry out when a disaster occurs, with backup personnel so the needs are covered.</p><p><span style="color: #ffa400;">* Contingency Plans: </span>The impact on business systems and data protection depends on the disaster, so the plan should include different procedures for the various events that could occur, like power outages, electrical fires, severe weather, etc.</p><p><span style="color: #ffa400;">* Formal Review Process:</span> For disaster recovery planning to be effective, it should be handled on an ongoing basis that includes regular testing. Failing to regularly test the plan can put the company at risk of having outdated policies and procedures that are not relevant to current operations or that don't help during a disaster. 
For example, software updates or a new vendor would be reasons to update the plan.</p><p><span style="color: #ffa400;">Replicate Failover in Azure: </span> Performing a failover is part of our Business Continuity and Disaster Recovery (BCDR) strategy. A BCDR strategy replicates our on-premises machines to Azure on an ongoing basis. The users can access the workloads and apps on the on-premises source machines. If an outage occurs on-premises, you fail the replicating machines over to Azure. Azure VMs are created with the replicated data. The users can continue accessing apps on the Azure VMs for business continuity. Failover can be performed by:</p><p> 1. Failover, which creates Azure VMs and brings them up at the selected recovery point </p><p> 2. After failover, verifying the VM in Azure and <span style="color: #800180;">committing</span> the failover to the selected recovery point, or committing a different point.</p><p> Failover in Azure Site Recovery has the following stages:</p><p><span style="color: #ffa400;">Stage 1:</span> Failover from on-premises after setting up replication to Azure for on-premises machines. When your on-premises site goes down, you fail those machines over to Azure.</p><p><span style="color: #ffa400;">Stage 2:</span> Reprotecting the Azure VMs so that they replicate back to the on-premises site. The on-premises VM is turned off during reprotection, which helps ensure data consistency.</p><p><span style="color: #ffa400;">Stage 3: </span>When the on-premises site is running as normal again, meaning the failover has been handled successfully, you can run another failover, this time from the Azure VMs back to our on-premises site (a failback). You can fail back to the original location or to an alternate location.</p><p><span style="color: #ffa400;">Stage 4:</span> Finally, reprotecting the on-premises machines after failing back, which re-enables replication of the on-premises machines to Azure.</p><p><span style="color: #ffa400;">Recover Workloads:</span> For organizations that have been operating on infrastructure running in-house, there is an opportunity to migrate these workloads to Azure, which saves costs and frees the space taken by these servers. Azure Site Recovery offers different options depending on the type of workload migration (physical or virtual). Azure Site Recovery provides a way to bring your servers into Azure while allowing them to be failed back to your on-premises data center as part of business continuity and disaster recovery. The common practice is to perform the failover and use ASR to move servers to Azure. These steps are followed to configure Azure resources for <a href="https://www.microsoftpressstore.com/articles/article.aspx?p=2992604" target="_blank">migrating existing servers to Azure</a> and to configure the components of Site Recovery.</p><p><span style="color: #ffa400;">Containers:</span> Azure provides cloud-based container workloads including:</p><p> * Azure Kubernetes Services (AKS)</p><p> * Azure Container Instances (ACI)</p><p> * Azure App Service</p><p> * Azure Container Registry (ACR)</p><p> Azure Kubernetes Service uses VM scale sets to protect your workloads from node failures. It is also important to segregate the processes of recovering applications and recovering data. Azure storage solutions like disks and file shares create persistent volumes for applications hosted in containers, and the data can be protected using Azure Backup. 
ACR's geo-replication feature allows you to pull container images from secondary regions when the primary endpoint goes down due to a regional outage.</p><p> ACI is a managed service that allows you to run containers on the Microsoft Azure public cloud without requiring the use of VMs. ACI provides basic capabilities for managing containers on a host machine. ACI takes a layered approach, performing the management functions needed to run a single container, while orchestrators manage activities related to multiple containers. Because the container instance's infrastructure is managed by Azure, the orchestrator doesn't need to find the right host to run a single container; the elasticity of the cloud ensures hosts are always available. For applications that experience fluctuations, you would otherwise need to scale up the virtual machines in the cluster and deploy containers on those machines. ACI makes things simpler by letting the orchestrator deploy new containers directly on ACI and terminate them when they are no longer needed.</p><p> The Azure App Service provides multi-region deployment as the best way to minimize application downtime. It also provides a backup and restore feature that automatically creates a backup of your application configuration, file content, and databases connected to the app. If there is a regional outage, applications hosted in the Azure App Service will be placed in DR mode. </p><p> For serverless apps like Azure Functions and microservices-based deployments, it is best to separate the configuration from the code in cloud-scale deployments. Azure App Configuration can store configuration information that can be accessed at runtime. It also fast-tracks the redeployment of applications during a disaster.</p><p><span style="color: #ffa400;">Deploy the Resources in Azure:</span> Azure provides a way to deploy and manage VMs and other resources. Azure Resource Manager is a deployment and management service for Azure that manages resources using declarative JSON templates rather than scripts. With Azure Resource Manager, you can customize resource deployment using parameters, access controls, and more, with templates for any scenario you need. To learn about resource group deployments, see <a href="https://docs.microsoft.com/en-us/azure/templates/" target="_blank">Bicep or ARM templates</a>.</p><p> An enterprise-grade solution can speed up your recovery time. These are the things to look for:</p><p><span style="color: #ffa400;"> * Click-Driven DR Plans: </span>Setting up the disaster recovery policies, or custom policies for each individual application, using built-in, click-driven workflows.</p><p><span style="color: #ffa400;"> * Continuous Backup:</span> It is essential to look for a solution that works around the clock to limit data loss.</p><p><span style="color: #ffa400;"> * Multi-Tenant, Agentless and Self-Service:</span> This kind of solution saves time, money, and resources by restoring your entire environment autonomously and efficiently.</p><p><span style="color: #ffa400;"> * Storage and Cloud Agnostic: </span>Back up point-in-time copies to any storage or cloud so that you are not dependent on the production environment copy.</p>
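<p>Tying back to the Azure Resource Manager discussion above, here is a hedged sketch of deploying a (deliberately empty) ARM template with the Azure SDK for Python; the subscription ID, resource group, deployment name, and template contents are all placeholders:</p>
<pre>
# pip install azure-identity azure-mgmt-resource
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "my-subscription-id")

template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {"location": {"type": "string"}},
    "resources": [],  # resource definitions would go here
}

poller = client.deployments.begin_create_or_update(
    "my-resource-group",
    "demo-deployment",
    {"properties": {"mode": "Incremental",
                    "template": template,
                    "parameters": {"location": {"value": "eastus"}}}},
)
poller.result()  # block until the deployment finishes
</pre>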
<p><span style="color: #ffa400;">Website recovery from Azure: </span>The Azure Site Recovery service contributes to a business continuity and disaster recovery strategy by keeping business applications online during outages. Azure Site Recovery manages on-premises machines and virtual machines (VMs), including replication, failover, and recovery. It is a cloud-based DRaaS for planned and unplanned outages. It helps to ensure business continuity by keeping business apps and workloads running during outages. It replicates workloads running on physical machines and VMs from a primary site to a secondary location. Azure Site Recovery can be used in cloud and hybrid cloud architectures. The data replication process makes sure copies are in sync, and ASR ensures that the data is usable after the failover. ASR supports multiple scenarios, like:</p><p> * Replication of physical servers from on-premises to Azure</p><p> * Windows and Linux VMs hosted in VMware and Hyper-V to Azure</p><p> * Windows VMs hosted in AWS to Azure</p><p> * Windows and Linux VMs in Azure to Azure</p><p>The replicated data is stored in Azure Storage, which is resilient. ASR supports the protection of Windows and Linux workloads hosted on physical servers on-premises, VMs hosted in VMware/Hyper-V, and third-party hosting platforms/clouds. The Azure ASR console provides a unified view of the replication status of different workloads and allows you to carry out maintenance tasks such as tweaking plans. ASR supports replication frequencies as low as 30 seconds and can be tailored to meet an organization's RPO and RTO targets. By integrating automation runbooks and Traffic Manager, the RTO can be reduced further. This tutorial will be useful for integrating or creating <a href="https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/what-is-application-management">cloud enabled SaaS applications with Azure Active Directory</a>.</p><p> </p><p><br /></p><p><br /></p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-46448825582248544712022-02-10T01:54:00.014-08:002022-02-11T23:25:25.840-08:00What are the basics of building machine learning models for our clients?<div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEjwrDR8LPgvq9GwGlPZuejBcHDqOChKmFIr4aOjaJ99PkUq06OdVRMkUW9MPaPrQyEUt-b0ER-H8lOUcOV0kBC7jjqlYtkCdQXfa4wcxvg0oQ87hRVY9qPPEYFz4XjK6fHNv6ZwMjwyWzlyxAxBQhJ6UkvRPiB_dKEgwW1Ra9vWTmhUarz1KvEEPB4F=s1200" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="1200" height="404" src="https://blogger.googleusercontent.com/img/a/AVvXsEjwrDR8LPgvq9GwGlPZuejBcHDqOChKmFIr4aOjaJ99PkUq06OdVRMkUW9MPaPrQyEUt-b0ER-H8lOUcOV0kBC7jjqlYtkCdQXfa4wcxvg0oQ87hRVY9qPPEYFz4XjK6fHNv6ZwMjwyWzlyxAxBQhJ6UkvRPiB_dKEgwW1Ra9vWTmhUarz1KvEEPB4F=w661-h404" width="661" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><b style="text-align: left;"><span style="color: #800180;"> </span></b></div><div class="separator" style="clear: both; text-align: justify;"><span style="text-align: left;"><b><span style="color: #800180;">Process of Building an End-to-End Machine Learning Pipeline:</span></b> In the research environment, the data scientist investigates and develops the model that generates the business value. The typical ML pipeline involves gathering data from different data sources. 
It can be described as:</span></div><div class="separator" style="clear: both; text-align: center;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEhIEgrGBeaYWbOwQISkWbu2F_VhLvSQbuk7ezUZvNaOjjJOutpAh5WygKEOe7Yune3U5zwZ8zXUhzjV4K0z0D9lWzFSdaEV7MkzgXnRrw_pEz4rHwb3T1VvpBhM13nxA0ngNFg5W9XaDHq0OAZ5Ui8mVeHM8Yz4dgR5C4U4nxHFhEreKm_MZMOJjAse=s1895" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="445" data-original-width="1895" height="151" src="https://blogger.googleusercontent.com/img/a/AVvXsEhIEgrGBeaYWbOwQISkWbu2F_VhLvSQbuk7ezUZvNaOjjJOutpAh5WygKEOe7Yune3U5zwZ8zXUhzjV4K0z0D9lWzFSdaEV7MkzgXnRrw_pEz4rHwb3T1VvpBhM13nxA0ngNFg5W9XaDHq0OAZ5Ui8mVeHM8Yz4dgR5C4U4nxHFhEreKm_MZMOJjAse=w642-h151" width="642" /></a></div><br /></div><div class="separator" style="clear: both; text-align: justify;"><span style="text-align: left;"><b><span style="color: #800180;">In data analysis:</span></b> We come to understand the data: the variables to use, how they relate to each other, regulations in the business, what we want to predict, etc. We transform these data into various forms, like creating or extracting new features. </span></div><div class="separator" style="clear: both; text-align: justify;"><span style="text-align: left;"><br /></span></div><div class="separator" style="clear: both; text-align: justify;"><span style="text-align: left;"><b><span style="color: #800180;">In Feature Engineering: </span></b> We make the variables ready for the ML model. There are a variety of problems in our data set, involving missing values, categorical variables, distributions, outliers, etc. Missing data is the absence of values for certain observations within a variable. It affects all ML models, so we need to prepare some sort of number for the ML to use. The second problem is that categorical variables come with labels rather than numbers. This problem has three aspects:</span></div><p><span style="color: #800180;"> * Cardinality:</span> Variables with a big number of categories dominate the variables with a smaller number of categories. This is especially true in tree-based algorithms.</p><p><span style="color: #800180;"> * Rare Labels:</span> These present operational problems precisely because they are rare: they may appear only in the training data set or only in the test data set. So, if one is present in your observations, you may need to add additional steps to model deployment.</p><p> * A <span style="color: #800180;">categorical variable</span> may contain strings, but we need numbers in ML when using scikit-learn models.</p><p> The third consideration is the <span style="color: #800180;">distribution of the variable</span> for numerical variables: specifically, whether it is a Gaussian distribution or skewed, and then selecting the features that the ML model can use.</p><p> <span style="color: #800180;">Outliers</span> are unusual or unexpected values in a variable that are extremely high or extremely low compared to all the other values. The magnitude of the features also affects model performance. For example, in house price prediction, one variable is the area in terms of sq. km and the other variable is the number of rooms, which varies from 1 to 10. In a linear model, the variable that takes higher values will have the predominant role in the house pricing. In this case, the area variable is more important in determining the price of the house; however, the number of rooms also plays an important role. So, the algorithms are sensitive to scale.</p>
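<p>A minimal, hedged sketch of two of these preparation steps - median imputation for missing data and scaling for the magnitude problem - using scikit-learn; the column names and values are invented for the house-price example:</p>
<pre>
# pip install scikit-learn pandas
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame({"area_sqkm": [0.4, 0.9, None, 1.2],
                  "n_rooms": [3, 7, 5, None]})

prep = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # median imputation for missing values
    ("scale", StandardScaler()),                   # put both variables on the same scale
])
X_ready = prep.fit_transform(X)  # fit learns the parameters, transform applies them
</pre>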
<p>The algorithms that are sensitive to magnitude are:</p><p> * Linear and Logistic Regression</p><p> * Neural Networks</p><p> * Support Vector Machines</p><p> * KNN</p><p> * K-means clustering</p><p> * Linear Discriminant Analysis (LDA)</p><p> * Principal Component Analysis (PCA)</p><p><b><span style="color: #800180;"> Solutions for Feature Engineering Problems:</span></b> A variety of techniques are used to resolve feature engineering problems. In the case of missing data, categorical labels, skewed distributions, and outliers, we need to perform transformations of the variables.</p><p> * For missing data imputation, we can use mean/median imputation or arbitrary value imputation. </p><p> * For categorical variables, we can add a missing category or a missing indicator. </p><p> * For distributions, the approach is mathematical transformations from skewed to Gaussian; the variable transformations are logarithmic, reciprocal, exponential, etc. </p><p> * For outliers, we need to perform discretisation or truncation on the dataset. For discretisation in an unsupervised model, equal-width, equal-frequency, and k-means techniques are used; similarly, decision trees are used in the supervised model. </p><p> If the data are images, text, time series, or distances, we need to extract or create new features from these data to feed to our models. In the case of text, we can count the characters, words, sentences, and paragraphs, and compute lexical diversity or TF-IDF features. These are the transformations applied before the data are transferred to the ML model.</p><p><b><span style="color: #800180;">Feature Selection:</span></b> Feature selection means finding the variables that are the most predictive ones and building the models using those variables rather than the entire dataset. This is the process of identifying the predictive features. We select features because:</p><p> * Simpler models are easier for consumers to interpret: models built with 10 features are easier to understand than models built with 100 features. This also reduces the risk of data errors and data redundancy.</p><p> * Shorter training times.</p><p> * Models built with fewer features are easier for software developers to implement or integrate with business systems and easier to put into production, because of the smaller JSON messages sent to and from the business systems and models. The JSON messages contain the variables or inputs to the model and carry the prediction back out to the system. There is also less code, and fewer potential errors to handle.</p><p><b><span style="color: #800180;">Feature Selection Methods:</span></b></p><p><b><span style="color: #800180;"> * Filter Method:</span></b> This is a simple statistical method. Filter methods are independent of the ML algorithm to be built at the end; they are based only on the variable characteristics. They have the advantage of quickly removing junk features, and they are model agnostic: the selected features are suitable for any algorithm, and computation is fast. The disadvantage is that this method does not capture redundancy and feature interaction.</p><p><b><span style="color: #800180;"> * Wrapper Method:</span></b> This method takes into account the algorithm you intend to build when selecting the features. It does not evaluate one feature at a time but evaluates groups of features. The advantage of this method is that it considers feature interaction. 
<p> If the data are images, text, time series, or distances, we need to create or extract new features from them to feed to our models. In the case of text, we can count the characters, words, lexical diversity, sentences, and paragraphs, or compute TF-IDF features. These are the transformations applied before the data is transferred to the ML model.</p><p><b><span style="color: #800180;">Feature Selection:</span></b> Feature selection means finding the variables that are the most predictive and building the models using only those variables. It is the process of identifying the predictive features. We select features because,</p><p> * Simpler models are easier for consumers to interpret: a model built with 10 features is easier to understand than a model built with 100 features. Fewer features also reduce the risk of data errors and data redundancy.</p><p> * Shorter training times.</p><p> * Models built with fewer features are easier for software developers to implement, integrate with business systems, and put into production, because smaller JSON messages are sent to and from the model. These JSON messages contain the variables or inputs to the model and carry the prediction back to the calling system. There is also less code, and fewer potential errors to handle.</p><p><b><span style="color: #800180;">Feature Selection Methods:</span></b></p><p><b><span style="color: #800180;"> * Filter Method:</span></b> A simple statistical method. These methods are independent of the ML algorithm to be built at the end and are based only on the variable characteristics. The advantages are the quick removal of junk features (particularly low-variance variables) and that it is model agnostic: the selected features are suitable for any algorithm, and computation is fast. The disadvantage is that the method does not capture redundancy or feature interaction.</p><p><b><span style="color: #800180;"> * Wrapper Method:</span></b> It takes into account the algorithm intended to be built when selecting the features, and it evaluates groups of features rather than one feature at a time. The advantage of this method is that it considers feature interaction; the disadvantage is that it is not model agnostic.</p><p><b><span style="color: #800180;"> * Embedded Method:</span></b> Feature selection happens during the training of the ML algorithm itself; Lasso regression is an example of this method. It captures feature interaction and has good model performance. It is faster than the wrapper method, but it is not model agnostic.</p><p> Finally, in <b><span style="color: #800180;">Model Building</span></b>, we try various algorithms and analyze their performance to choose the one that produces the best results. We evaluate statistical metrics like mean square error for regression, accuracy for classification, etc. In business, we need to weigh the statistical metrics against business value; for example, if we build a model for advertising, we can evaluate the model by the no. of new customers it brings in while in use. Several models can be built, such as,</p><p> * Linear Models or Logistic Regression (MARS)</p><p> * Tree Models or Random Forests (Gradient Boosted Trees) </p><p> * Neural Networks for a supervised model</p><p> * Clustering Algorithms</p><p>Among the metrics used to measure the performance of the models, ROC-AUC summarizes, from the probability outputs, how often the model makes a good assessment versus a wrong one. Mean square error or root mean square error are used for linear models.</p><p><b><span style="color: #800180;">Programming in Machine Learning Models:</span></b> In an ML pipeline, we repeatedly learn from the data and then transform it, or learn from the data and then predict from it. In procedural programming, the different steps learn the parameters from the data and then use those parameters to make the transformations or predictions; the stored parameters are values like means for imputation or the coefficients of linear or Lasso regression models. In object-oriented programming, classes are used: we write the feature engineering code in the form of objects that store data (attributes or properties) together with the instructions or procedures (methods) that modify that data or obtain the predictions. In an ML model, the objects can learn and store parameters and are automatically refreshed every time the model is retrained. They expose fit and transform methods: the fit method learns the parameters, and the transform method transforms the data with the learned parameters.</p>
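<p> A minimal sketch of that fit/transform pattern; the MeanImputer class below is hypothetical, written only to illustrate the idea of learning a parameter once and reusing it:</p>
<pre>
# Minimal fit/transform sketch; MeanImputer is a hypothetical illustration.
import numpy as np

class MeanImputer:
    def fit(self, X):
        # learn the parameters (the column means) from the training data
        self.means_ = np.nanmean(X, axis=0)
        return self

    def transform(self, X):
        # apply the learned parameters to any data, old or new
        X = X.copy()
        for j, mean in enumerate(self.means_):
            column = X[:, j]
            column[np.isnan(column)] = mean
        return X

X_train = np.array([[1.0, 10.0], [np.nan, 20.0], [3.0, np.nan]])
imputer = MeanImputer().fit(X_train)  # parameters learned once
print(imputer.transform(X_train))     # reused every time the pipeline runs
</pre>
<p>Retraining the model then simply means calling fit again, which refreshes the stored parameters.</p>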
<p> Once we are satisfied with the results, we can go for <b><span style="color: #800180;">Model Deployment</span></b>. Deploying the model means that the model in the cloud should take the data, transform and select the variables, and obtain the prediction using the trained model. Our in-house software is versioned for each ML system, tested, and installed across the systems against the expected outputs, which minimizes the deployment time. A typical case is deploying a CNN that was trained on a large dataset and now makes predictions on new data. Normally, big data is used for analytics and ML: a large volume of structured, semi-structured, or unstructured data. In business, big data can be text in large documents, images, or complex time series that need to be handled. So, we build deep learning models with various layers and classes of parameters to be determined; they normally rely on GPUs, and the neural networks can be extremely heavy to compute.</p><p><b><span style="color: #800180;">ML System Architecture:</span></b> There are 4 architecture approaches for ML systems. Those are,</p><p> 1. Model Embedded in an application</p><p> 2. Served via Dedicated Service</p><p> 3. Model published as data (Streaming)</p><p> 4. Offline Prediction</p><p> When you serve ML models, there are different file formats, so you need to serialize the model object, for example with the Python pickle module. Open-source tools like MLflow provide a common serialization format for exporting Spark, Scikit-learn, and TensorFlow models. There are also language-agnostic formats for sharing models.</p><p><span style="color: #800180;"><b> In the Embedded Approach</b></span>, the trained model is embedded as a dependency of our application: we install it with pip (pip install <mymodel>) or pull the trained model in at build time from file storage like AWS S3. The application is then able to make predictions on the fly. For example, in a single Django application, an HTML page has a form, and when you submit the form, the Django app takes the input and produces the prediction. In the case of a mobile app, the app ships a native ML library and performs the prediction on the device.</p><p><span style="color: #800180;"><b> In the Dedicated Service Approach</b>,</span> the architecture has a separate ML service for the trained model. For example, when the form is submitted in the Django app, that server makes a second call to a dedicated microservice, and this microservice is responsible for the prediction. Django makes the call via REST, gRPC, or SOAP. Here, model deployment is separate from the main application. </p><p><span style="color: #800180;">Serve the Model via REST API:</span> The model prediction is exposed through a REST API. It is very simple to put up a server that will transfer to the client a representation of the state of a requested resource; in our case, the requested resource is the prediction from the model, and the client could be a website or a mobile device. A REST API has the potential to combine multiple models at different API endpoints, and it is easy to scale by adding more instances of the API application behind a load balancer as well (a minimal serving sketch appears at the end of this section).</p><p><b> <span style="color: #800180;">The Model Published as Data</span></b> approach leverages a streaming platform like Apache Kafka or Apache Pulsar. In this process, the training process publishes the model through the streaming platform, and the application consumes the model at runtime, which enables seamless model updates. For example, the Django application can consume a dedicated Kafka topic where new versions of the model are published. Here, there is no separate deployment step when models are upgraded.</p><p><b><span style="color: #800180;"> Offline Prediction</span></b> is an asynchronous design. Predictions are triggered and run asynchronously, via the application or a scheduled job. After a few hours or days, the predictions are collected into a database or some other form of storage and consumed via dashboards or reports. </p>
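<p> Here is that minimal REST serving sketch, assuming a scikit-learn model serialized earlier with pickle and the Flask micro-framework; the file name, route, and payload format are illustrative assumptions, not fixed conventions:</p>
<pre>
# Minimal serving sketch: pickle for serialization, Flask for the REST API.
# "model.pkl" and the payload format are illustrative assumptions.
import pickle
from flask import Flask, jsonify, request

with open("model.pkl", "rb") as f:  # written earlier with pickle.dump(model, f)
    model = pickle.load(f)

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()  # e.g. {"features": [[5.0, 3]]}
    prediction = model.predict(payload["features"])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(port=5000)
</pre>
<p>Scaling this service is then a matter of running more instances behind the load balancer.</p>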
So there is a trade-off between the systems and the requirements when choosing an ML architecture.</p><p><br /></p><br /><div class="separator" style="clear: both; text-align: center;"><br /></div><br />Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-79280806525875242492021-09-26T22:51:00.023-07:002021-10-03T19:14:28.286-07:00What is agile workforce performance management and estimation?<div><span style="color: #800180;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFOsW3ZbFev8XuMQvGtPytZeJqjVtEaMw2-s_wuFgd8zU0f5CXlv3zfzcC-wwIazZdA1Qdl_5pJCVTrqCh9bWWGQR2gFpTy3P4MA0QvOBFpIC-Pr5OZJKcNp8m8UVcw2rEXpBi1oOqmis/s1147/agile-work.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="1147" height="335" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFOsW3ZbFev8XuMQvGtPytZeJqjVtEaMw2-s_wuFgd8zU0f5CXlv3zfzcC-wwIazZdA1Qdl_5pJCVTrqCh9bWWGQR2gFpTy3P4MA0QvOBFpIC-Pr5OZJKcNp8m8UVcw2rEXpBi1oOqmis/w659-h335/agile-work.jpg" width="659" /></a></div><b><div><span style="color: #800180;"><b><br /></b></span></div>Fundamentals of Agile Software Development: </b></span></div><div> Agile methods are based on an iterative and incremental development approach in which the requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. Agile promotes adaptive planning, evolutionary development, and early delivery, with a time-boxed iterative approach that encourages rapid and flexible response to change. Iterative development is a way of breaking a large application down into smaller chunks: software is designed, developed, and tested in repeated cycles. In the incremental software development method, the product is designed, implemented, and tested incrementally until the product is finished. This approach has the following benefits,</div><div> * Accelerates time to market,</div><div> * Increases project visibility by having multiple iterations of the end-to-end development cycle rather than one, </div><div> * Welcomes evolving requirements and reprioritizes as needed,</div><div> * Reduces risk and cost by testing early and identifying potential issues in the project, </div><div> * Improves collaboration with the business organization. </div><div><br /></div><div> <b><span style="color: #800180;">Agile Planning: </span></b>Agile planning with value-based analysis is the process of assessing and prioritizing the work items and planning accordingly. At every stage of the project we can check the business value of the items, so that the items with the highest business value are prioritized and delivered first. An item's value is weighed against its development and delivery cost; for example, a feature that costs only $1,000 but delivers $3,000 in value is worth developing, because it has high business value. The agile team creates a high-level estimate of the backlog items when evaluating the work items in the project. The other factors to consider are payback frequency (the time period involved, such as one week or one month) and dependencies (such as compliance checks). </div><div> The team elicits requirements from the stakeholder group, which ranks them, and prioritizes the requirements into the development process. The first step in the process is to define the vision; this is called "designing the product box". In this activity, the team designs an imaginary product box and identifies the product's top 3 features. This helps the stakeholders understand the high-level vision of the project and build consensus around common goals, mission, and success criteria. In the final step, the team runs the iterative development process against a list prioritized by business value. </div><div> Agile planning uses the timebox method: an agreed period during which the person or team works towards the completion of a goal, rather than working until the goal is reached. It is a method of evaluating what was accomplished within the timeline and then stopping the work. In agile, all events are timeboxed; for example, stand-up meetings are timeboxed to 15 minutes and sprints are timeboxed to 1 to 4 weeks. This helps the team adjust their scope and deliver the best quality product within a fixed cost and time frame. </div><div> The agile planning process is conducted through 3 steps,</div><div><br /></div><div> Sizing -----> Estimating -----> Planning </div><div><br /></div><div> As shown above, we need to determine how large the task is and estimate how quickly we are able to complete it. So, we break large chunks of work down into smaller units (size, estimate, and plan). Agile planning differs from traditional planning in the following ways, </div><div><b> <span style="color: #800180;"> 1. Uncovers the true requirements through pre-planning</span> </b>- Agile projects build a prototype to better understand the domain and use this prototype as a basis for further planning and elaboration, relying on trial and demonstration to uncover the true requirements of the project. </div><div> <b><span style="color: #800180;">2. Agile planning is less of an upfront effort</span></b> - Agile methods recognize that the level of risk and uncertainty on knowledge work projects makes upfront planning problematic, so they distribute the planning effort throughout the life cycle, which helps the team adjust to emerging information. Planning becomes a visible and iterative component of the project life cycle. Agile methods gather feedback, measure progress in the same units as the feature estimates (story points, days, or hours), and adjust the backlog accordingly. The 3 key levels of agile planning are the High-Level plan, the Release plan, and the Iteration plan. Agile teams get a lot of feedback in this ongoing planning process. In an agile approach, you need to update the plan at, </div><div><br /></div><div> * Backlog reprioritization </div><div> * Feedback from the iteration demonstration </div><div> * Retrospectives, which generate improvements to the team's processes and techniques</div><div><br /></div><div><span style="color: #800180;"><b> Agile Principles:</b> </span>Agile discovery refers to the evolution and elaboration of plans on agile projects, where planning with incomplete initial requirements is in effect. There are nine high-level principles of agile. Those are,</div><div><br /></div><div> <b><span style="color: #800180;">* Plan at Multiple Levels:</span></b> At a high level, we cover the overall scope and what needs to be done for the project, then the release details, then the iterations. </div><div><b><span style="color: #800180;"> * Engage the Team and Customer:</span> </b> This improves the team's knowledge and technical insight and generates buy-in and commitment. </div><div><span style="color: #800180;"><b> * Manage Expectations by Frequent Demonstrations: </b></span> The current rate of progress predicts the completion date and costs of the project.</div><div><span style="color: #800180;"><b> * Integrate the Processes to Project Characteristics:</b></span> When there is uncertainty, the team explores whether the proposed technological approach will work and mitigates the risk.</div><div> <b><span style="color: #800180;">* Update the Plan Based on Priorities:</span></b> The business sets priorities for the project, and these are reflected in the backlog priorities created by the product owner with the development team.</div><div><span style="color: #800180;"><b> * Estimate in a Way that Accounts for Risk, Distractions, and Team Availability: </b></span>In order to produce a better estimate, start with the historical averages and factor in the team's availability, distractions, and other calls on their time. </div><div><span style="color: #800180;"><b>* Use Estimate Ranges that Reflect the Level of Uncertainty: </b></span>An estimate range helps to assess the risk associated with our project.</div><div><span style="color: #800180;"><b> * Base Projections on Completion Rates:</b></span> Projections should be based on the actual completion rate of the work, because these numbers show the real rather than the ideal rate of progress.</div><div><span style="color: #800180;"><b> * Factor in the Project, Divergence, and Outside Work:</b></span> Consider the various demands on agile projects, such as people supporting other efforts, and both planned and unplanned interruptions.</div><div><br /></div><div> <b><span style="color: #800180;">Agile Estimation:</span> </b>Agile estimation uses heuristic and parametric estimates. The heuristic approach looks at data from projects that created similar products of the same size and complexity and sees how long those projects took. Parametric estimates, such as bottom-up estimates, are based on the complexity of the user stories. The estimate convergence graph shows that software project estimates move from a broad range early in the life cycle to a more manageable range once the scope and specification are agreed upon, and they continue to narrow as more is learned about the project. The product features are broken down into smaller units called user stories; the user stories are then broken down into smaller chunks called tasks. Epics are larger groupings of user stories. The epics might be positioned in 3 different hierarchies such as</div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0GxCCJ26TshT0KrWS4OLSSMP23mnl2XW2Jk_Q5M5gKnVrJ40M7bGEo5VjpZJQseVEjWADhXwnuTAsyS0iqUMqcZlSbQKkyfxjguHMuGaKbgMAZKOBQ1FMwCNEEVIfPBXy9aWZ1-7d-4g/s1650/F88EE70F-D953-4C7D-9D44-BB95A66DD398.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="611" data-original-width="1650" height="193" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0GxCCJ26TshT0KrWS4OLSSMP23mnl2XW2Jk_Q5M5gKnVrJ40M7bGEo5VjpZJQseVEjWADhXwnuTAsyS0iqUMqcZlSbQKkyfxjguHMuGaKbgMAZKOBQ1FMwCNEEVIfPBXy9aWZ1-7d-4g/w600-h193/F88EE70F-D953-4C7D-9D44-BB95A66DD398.jpeg" width="600" /></a></div><div><br /></div><div>The project requirements are broken down just in time, at the last responsible moment, so that the team is closest to the work when it is detailed.</div><div> <span style="color: #800180;"><b>Agile Estimation Techniques:</b></span></div><div><b><span style="color: #800180;"> A. Affinity Estimating:</span></b> If a project has just started and the backlog hasn't been estimated yet, or in preparation for release planning, the agile team uses this technique to estimate the user stories in story points. This is done in 5 steps,</div><div><b><span style="color: #800180;"> 1. Silent Relative Sizing: </span></b>In this step, two sticky notes, one labeled "Smaller" and one labeled "Larger", are placed on the wall or whiteboard. Team members are expected to size each item relative to the other items on the wall, considering the effort involved in implementing it to the definition of done. Run the silent relative sizing until all the product backlog items are up on the wall, with a space reserved for questionable items.</div><div><b><span style="color: #800180;"> 2. Editing of the Wall:</span> </b>Once the items are up on the wall, it is time to edit the relative sizing: ask the team members to read the product backlog items on the wall and move them around as needed in either direction. During this exercise, you may see design discussions going on, missing backlog items surfacing, and increased clarification being sought from the product owner.</div><div><span style="color: #800180;"><b> 3. Place the Items into Relative Sizes:</b></span> The team places the items along the spectrum at the top of the wall between smaller and larger. If you are using T-shirt sizing, the buckets may look like S, M, L, XL, XXL, and you need to put each user story into one of those buckets.</div><div> <span style="color: #800180;"><b>4. Mark the Items by Color:</b> </span>The product owner marks the items with a color to flag the estimates made so far. </div><div><b><span style="color: #800180;">5. Get It into Your Tool:</span></b> Finally, make sure that the estimates get into your product backlog management tool of choice, like Jira.</div><div><b><span style="color: #800180;">B. Wideband Delphi: </span></b>It is a popular project management and structured communication technique that relies on a panel of experts; it was developed as a systematic and interactive forecasting method. The team comprises 3-7 members, consisting of an agile coach, a moderator, experts, and developers. There are many rounds: in each, the facilitator provides an anonymous summary of the experts' forecasts from the previous round, with the reasons for their judgments. 
Experts are encouraged to revise their earlier answers in light of the other panel members' replies, so that the group converges towards the correct answer. Finally, the process is stopped by pre-defined criteria: number of rounds, consensus, or stability of results. Wideband Delphi is conducted in six steps, and at the end, the product owner collects all the estimates from the team members.</div><div> <b><span style="color: #800180;"> C. Planning Poker:</span></b> It is one of the most commonly used techniques in agile projects. The team knows which stories are the highest priority by looking at the backlog, so they commit to as many high-priority stories as they can in each iteration. Planning Poker is used to figure out the effort to build the prioritized list of user stories, usually as an estimate of the story points needed to build each story at the start of each iteration; the team conducts subsequent estimating and planning sessions once per sprint. It is a very helpful tool for an initial estimate to shape the product roadmap, and it can be conducted after the daily stand-up meeting. Planning Poker is a collaborative approach: each team member estimates the whole effort, which helps everyone get a better understanding of the whole project. The voting for an item continues until all the team members anonymously vote the same way. If more than one person has the lowest or highest vote, just one person shares their reasoning, to help the process move quickly. </div><div><b><span style="color: #800180;"> D. Bucket System:</span></b> This method is used to estimate a large number of items with a small to medium-sized group of people, and to do it quickly. Everyone in the group participates, and hundreds of items can be estimated in little time. It encourages group accountability for the effort estimates and for the value delivered to the stakeholder. </div><div><b><span style="color: #800180;"> E. Velocity:</span> </b>This is a powerful technique for accurately measuring the rate at which agile development teams consistently deliver business value. To calculate the velocity of your team, simply add up the estimates of the features, user stories, and backlog items successfully delivered in an iteration. It is measured in the same units as the feature estimates: story points, days, or hours. For estimating the initial velocity of your agile team, you should use proven, historical measures for planning features. Within a short time, velocity stabilizes and provides a tremendous basis for improving the accuracy and reliability of short and long-term planning for agile projects.</div>
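<div> As a minimal sketch of the velocity arithmetic (the sprint numbers here are illustrative assumptions, not data from a real team):</div>
<pre>
# Minimal velocity sketch; all numbers are illustrative.
import math

completed_per_iteration = [18, 21, 20]  # story points delivered in past sprints
velocity = sum(completed_per_iteration) / len(completed_per_iteration)

remaining_points = 120                  # estimated backlog still to deliver
iterations_left = math.ceil(remaining_points / velocity)

print(f"velocity ~ {velocity:.1f} points per iteration")
print(f"about {iterations_left} iterations to finish the backlog")
</pre>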
<div><br /></div><div><br /></div>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-53351075117944650882021-01-22T07:43:00.024-08:002021-01-31T07:02:30.596-08:00How to integrate AI, Machine Learning, Deep Learning Technique in our Business?<p><br /></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-eQmPrOvddNkjVVvyYxVixLWcfFh5arHpsTVDiQJAvd-zCgpDfLxADy6fzMXg6NC4UFNaSMpxN1QzTlwISlstW8_-GvqspeubR-CSgLy1UDjgy5Suycw1S4WXRfJ7vZK-pzpv_NJAmMc/s1173/ai.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="1173" height="406" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-eQmPrOvddNkjVVvyYxVixLWcfFh5arHpsTVDiQJAvd-zCgpDfLxADy6fzMXg6NC4UFNaSMpxN1QzTlwISlstW8_-GvqspeubR-CSgLy1UDjgy5Suycw1S4WXRfJ7vZK-pzpv_NJAmMc/w630-h406/ai.jpg" width="630" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><p></p><p> AI helps to create innovative new products, boost revenue, cut costs, and improve efficiency. The challenge many organizations face today is training deep learning techniques and deploying them into applications. Data science is a broad term that includes products, programming, and problem-solving to extract information from data. Data analytics, machine learning, and deep learning are widely used for solving problems in data science. Data analytics extracts insights from big data. Machine learning is a statistical technique to construct a model from the examples in the data; it generally takes the form of a classifier, feature extractor, or linear regression that helps to solve data science problems. In GPU-accelerated data science, users train complex neural networks on large amounts of data. Deep learning techniques are capable of human-level accuracy for many tasks, but they require a tremendous amount of computational power. There are use cases for GPU-accelerated data science in every industry: health care, financial services, retail, telecom. For example, health care uses accelerated data science to better predict diseases and the best treatments for a wide range of health conditions. Enterprises use accelerated data science to analyze customer data for product development and to monitor IT systems, turning the records into deeper insights for decision-makers. The quality and quantity of the data determine which data science approach works best for the problem to solve.</p><p><span style="color: #ffa400;"><b>Machine Learning:</b></span> Machine learning is the part of AI where intelligence is gathered from the experience of past data. Machine learning may not be based on a statistical model, but it provides some mathematical model to analyze the data. Machine learning does data mining to achieve a particular goal and is part of data mining. Machine learning is programming the computer to optimize a performance criterion based on past data. It means that the machine will improve its effectiveness in performing the task. 
Humans observe actions and outcomes and keep them in memory; this learning turns into a skill. Similarly, when we give past data to our machine, the computer learns from the data and starts performing better. There are many use cases in industry; for example, Google is able to predict what we want to search for based on our own search history and numerous other predictors.</p><p> When we enter the world of machine learning based on statistics, we look for the relationship between input and output variables. For example, a real estate agent wants to predict the price of a particular property. Here, the output variable is the price of the property, and the agent has to decide which factors affect it, like area covered, no. of bedrooms, proximity to a landmark or market, etc. Mathematically, the agent wants to establish price as a function of these variables,</p><p> Y = f(X1, X2, X3, X4, ...)</p><p> There are 2 motivations for estimating the function. Those are,</p><p>1. Prediction </p><p>2. Inference </p><p>Prediction means that we are just interested in getting the value of Y and not in the relationship of Y with each individual variable. Inference establishes the relationship between each input variable and the output, so that we know how the output will change when we manipulate an input. Unlike the real-estate agent, a builder would like to know whether a building near a market or a school fetches a better price, so this person's motivation is inference and not just prediction. Whether we want to predict or infer, we need a model for the analysis. There are 2 major classifications of models. Those are,</p><p>1. Parametric Approach </p><p>2. Non-Parametric Approach </p><p>In the parametric model, we assume a functional form for the relationship between the input and output variables; in the example, the predicted prices are close to the actual values. In the non-parametric method, we do not assign a functional form to this relationship; the functional form is estimated by the model, which can be very complex. Once you know the motivation, you can choose the type of model. There are 2 types of learning based on the data we have,</p><p>1. Supervised Learning </p><p>2. Unsupervised Learning </p><p>When your data has a particular output variable and one or more input variables, the model learns from the input and output values in this data. This type of learning is called supervised learning. However, if you do not have any output variable but just a set of input variables, the model learns the relationships between these variables. It is called unsupervised learning.</p>
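<p> As a minimal sketch of prediction versus inference with a linear model (assuming scikit-learn; the property data below is illustrative, not a real market):</p>
<pre>
# Minimal prediction-vs-inference sketch, assuming scikit-learn.
# X columns: [area covered, no. of bedrooms]; y: price. Values are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[120, 2], [150, 3], [200, 3], [250, 4]])
y = np.array([300_000, 370_000, 450_000, 560_000])

model = LinearRegression().fit(X, y)

# Prediction: we only care about the value of Y for a new property
print(model.predict([[180, 3]]))

# Inference: how each input variable relates to the output
print(model.coef_)       # estimated effect of area and bedrooms on price
print(model.intercept_)
</pre>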
<p><b><span style="color: #ffa400;"> DeepLearning:</span></b> Deep learning maps data samples from an input domain to an output domain. Input domains include text data, images, audio, and video streams. The output domain is determined by the business question the problem has to answer and by the deep learning task to perform. For example, if your business question requests a yes or no answer, the deep learning task is detection. If the question asks what type of thing something is, the DL task is classification. If the question requires a shape or volume as an answer, the DL task is segmentation. So, depending on the application, you may use a combination of tasks to achieve a more sophisticated output. For language translation, you could use speech detection in the form of classification, followed by translating text from one language to another in the form of a prediction. Different deep learning tasks require different models, like, </p><p><span style="color: #ffa400;">1. Convolutional Neural Networks (CNN)</span>, used for analyzing 2D/3D images: classification, segmentation, prediction, sequence analysis, and regression problems.</p><p><span style="color: #ffa400;">2. Recurrent Neural Networks (RNN)</span>, used for natural language processing; they work well on sequences, for tasks such as sentiment analysis, speech recognition, machine translation, etc.</p><p><span style="color: #ffa400;">3. Generative Adversarial Networks (GAN)</span> generate images or results that are realistic. The technique is used to create images, automatically construct pre-models, and produce high-resolution images, generating many plausible answers to the same question. </p><p><span style="color: #ffa400;">4. Reinforcement Learning</span> is used to learn the ideal behavior in a specific context, like optimizing resource management for a computer cluster, teaching robots to perform complex tasks, personalized recommendations, etc.</p><p><b style="color: #ffa400;">Fundamentals of Computer Vision: </b>Computer vision is making sense of pixel data, from static images and photographs to the moving images in videos. Images are made up of pixels, or picture elements, and each pixel is made up of channels: 3 channels for color images and one channel for greyscale images. Each channel holds a value from 0 to 255 denoting the strength of that channel. For color images, the channels represent red, green, and blue; by combining the values of these 3 channels, we can produce a variety of colors and reproduce the photograph.</p><p> A simple neural network, that is, a fully connected feed-forward network, can classify the iris dataset based on 4 attributes: petal height, petal width, sepal height, and sepal width. These are propagated through the network, which produces 3 outputs that are fed into a classifier and come out as probabilities for the 3 classes (Iris setosa, Iris versicolour, Iris virginica). If we inject an image instead of these input attributes, we need to flatten it into a one-dimensional vector, and the no. of parameters increases, occupying more memory and requiring more training. These challenges can be mitigated by a convolutional neural network (CNN). It takes an image into a network with 2 main elements, convolution and pooling, which are combined into a variety of CNN architectures. The fully connected layer at the end of the network is responsible for converting the grid into vectors, which are the essential features of the image. Hence a CNN is called a feature extractor.</p>
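<p> A minimal sketch of that iris classifier, assuming TensorFlow/Keras and scikit-learn are installed; the layer sizes and number of epochs are illustrative choices:</p>
<pre>
# Minimal feed-forward iris classifier sketch, assuming TensorFlow/Keras.
import tensorflow as tf
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)  # 4 input attributes, 3 classes

model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),  # hidden layer
    tf.keras.layers.Dense(3, activation="softmax"),  # probabilities for 3 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=50, verbose=0)  # more epochs can raise accuracy, up to a point
print(model.predict(X[:1]))            # class probabilities for one sample
</pre>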
<p><span style="color: #ffa400;"><b>Basics of DL App Development: </b></span>These are the steps to be followed in deep learning application development,</p><p><span style="color: #ffa400;">Step 1:</span> Collect the samples for the training dataset so that the deep neural network can learn. AI projects aren't successful without the right data, and this data needs to be prepped with purpose: the data you choose to train on directly affects the quality of the resulting model. You may need several thousand images for a classifier to be usable in a wider environment.</p><p><span style="color: #ffa400;">Step 2:</span> Select a deep learning network model. It is an untrained neural network designed to perform <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxZiiwp_9WZ_2u2NlUTy5YSsqNCm6Q51ftHQb1l9hW5pxsz1ZyaiVt9eFt3VqDDV3rD8EkUG9DtbLoSQuTP7xxrL45wYjFGkNZ8bpHxQqC21hMTJYEphhsK1aT2TkVNupDmK7pKWkel_Q/s336/BEEC8A2F-550A-4449-85A6-EB0844D982A1.png" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="150" data-original-width="336" height="244" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxZiiwp_9WZ_2u2NlUTy5YSsqNCm6Q51ftHQb1l9hW5pxsz1ZyaiVt9eFt3VqDDV3rD8EkUG9DtbLoSQuTP7xxrL45wYjFGkNZ8bpHxQqC21hMTJYEphhsK1aT2TkVNupDmK7pKWkel_Q/w320-h244/BEEC8A2F-550A-4449-85A6-EB0844D982A1.png" width="320" /></a></div>a general task: detection, classification, or segmentation. If you look at a neural network model, there are an input layer (nodes) and an output layer (nodes), with hidden layers (nodes) in between. The design of a neural network should be suitable for the particular task, and it is often necessary to modify the model to reach a high level of accuracy on the particular dataset. The algorithm runs over each node and each connection between the nodes, and this large computation can be done in parallel, which makes DL ideal for GPU acceleration. <p></p><p><span style="color: #ffa400;">Step 3:</span> Choose a deep learning framework, like TensorFlow or PyTorch, to train the dataset on the neural network. Each image is processed by the neural network, and each node in the output layer produces a number that shows how confident the network is about the image. For example, to classify an image as tea or coffee, the model needs two output nodes (one for tea, another for coffee), and the result is a confidence factor. The deep learning framework looks at the labeled image to determine the correct answer: if the network infers correctly, the framework strengthens the connection weights that contributed, and if it infers an incorrect result, the framework reduces the weights that contributed to the wrong answer. After processing the entire training dataset, the neural network has the experience to infer the correct answer, but it usually requires additional epochs over the dataset to achieve higher accuracy. </p><p><span style="color: #ffa400;">Step 4:</span> Now the model has been trained on a large, representative dataset, but it may still classify some items with a low confidence number. In that case, you modify the design topology of the model and retrain it with the new dataset. </p><p><span style="color: #ffa400;">Step 5:</span> Once the model has been trained, you can optimize its runtime performance by working on the layers, memory, communication overhead, node tuning, etc. The fully trained model is ready to be integrated into the application, so the application can quickly infer the correct answer based on the training. This application can be deployed to the cloud, a workstation, a robot, a self-driving car, etc.</p><p> In addition to the AI application, you need to ensure your organization supports the implementation considerations for workloads like deep learning training, deep learning inference, and machine learning. So, all stakeholders must be on board with appropriate AI infrastructure to support modern AI workloads.</p>
<p><b><span style="color: #ffa400;">Transfer Learning:</span></b> There are models already developed and trained whose skill sets can be reused, which saves the team a great deal of time and leverages existing AI models. A pre-trained model provides specific skills, from recognizing images in a video feed to detecting words in audio recordings for NLP. These models are trained on generally available data and can be adapted to specific use cases to improve performance in our environment, enabling the team to fine-tune them for specific needs. </p><p> Industry SDKs provide tools and software for developers to build, test, and deploy AI solutions and integrate them with the applications of their industry. For example, in healthcare, medical institutions use pre-trained models to classify MRI scans. With transfer learning, the models identify new diseases or already existing diseases after fine-tuning on the data available.</p><p><br /></p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com1tag:blogger.com,1999:blog-2943630978943594838.post-74666755960990797702021-01-01T02:16:00.002-08:002021-01-01T02:16:51.603-08:00Wish You A Very Happy New Year'2021<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdc7Zm2Z6U5wCdxR6jErf1Q-vZCPUkRXMhSrxtJPHBHR1ev7hMSHSSXk0fy6S8sjOgtWmpeTJLr-80eScgTc26O3Ukoo3cuT7AoI0EhV2TkVBPkdEuvR0eFkEMII0pZmhEUzW3KQPyyc8/s2048/Newyear-greet.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="2048" data-original-width="2048" height="276" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjdc7Zm2Z6U5wCdxR6jErf1Q-vZCPUkRXMhSrxtJPHBHR1ev7hMSHSSXk0fy6S8sjOgtWmpeTJLr-80eScgTc26O3Ukoo3cuT7AoI0EhV2TkVBPkdEuvR0eFkEMII0pZmhEUzW3KQPyyc8/w612-h276/Newyear-greet.jpg" width="612" /></a></div><br /> <p></p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-80613100993175228772020-12-28T04:22:00.009-08:002021-01-03T17:44:01.373-08:00How to improve the engagement rate of our product in online/ eCommerce website?<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiu4akXPEHTEr9dV5YUg8iuWyU_HVGGiT8ZRaTFK2pB5oFee_s7ttMgJgPkMQUhhmg0ypxOoELUTZhFk-sgFF-DgNZgWmdqel2bPb12DNuBl869TEyq8m25xzSttb9wazMGK7AsiaxQFjU/s625/customer.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="417" data-original-width="625" height="369" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiu4akXPEHTEr9dV5YUg8iuWyU_HVGGiT8ZRaTFK2pB5oFee_s7ttMgJgPkMQUhhmg0ypxOoELUTZhFk-sgFF-DgNZgWmdqel2bPb12DNuBl869TEyq8m25xzSttb9wazMGK7AsiaxQFjU/w601-h369/customer.jpg" width="601" /></a></div><br /><span style="color: #ffa400;"><br /></span><p></p><p><span style="color: #ffa400;"> UX / UI Design:</span> Great B2B UX makes users and developers happy. For a B2B application to deliver a successful user experience, it needs to fulfill the requirements from different angles: we want the app to be useful and usable, and it needs to provide business value. So, apart from the user requirements, you need to make sure that stakeholders know how to use the product by defining buyer personas and usage scenarios. The challenges to highly feasible UX designs are,</p>
<p>1. Fulfilling the user needs</p><p>2. Achieving the business goals</p><p>3. Technological constraints</p><p> Good design not only provides a good user experience but also saves the organization money. It can be characterized as, </p><p><span style="color: #ffa400;">1. Useful</span> - offering the right functionality through the app</p><p><span style="color: #ffa400;">2. Usable</span> - users can perform the desired task with the application</p><p><span style="color: #ffa400;">3. Findable</span> - users are able to find what they are looking for</p><p><span style="color: #ffa400;">4. Valuable </span>- it creates value for the business</p><p><span style="color: #ffa400;">5. Credible</span> - users are willing to trust and believe the design</p><p><span style="color: #ffa400;">6. Desirable</span> - the application motivates the user towards the desired task</p><p> This can be achieved by bridging the feedback gap. It includes writing user personas and usage scenarios, running usability tests, surveys, etc. Also, you should not forget whom you are building the application for: a customer, employee, colleague, supplier, or viewer. A persona is a realistic character representing a certain segment of the target group; it creates efficiency and focuses attention on the needs of real users. In order to create a persona, you need to find their goals, priorities, behavior, etc. The usage scenario is a story in which the user of the application plays the leading part. It explains the context of use, what the user wants to achieve, what they consider, what influences them, and what feedback they expect. Every major design decision is motivated by the usage scenario. It can be added to the agile backlog in the product development lifecycle: in the requirements phase, there is more room for improvement that can be added to the prototype; in the agile development phase, the features are implemented on the fly; finally, in deployment, the user experience is rolled out. </p><p> <span style="color: #ffa400;">Content Marketing</span> is an approachable and scalable means of growth, and there is a framework for massive organic growth. The building blocks of a content marketing strategy are,</p><p>1. Goals</p><p>2. Research</p><p>3. Authority</p><p>4. Promotion</p><p><b><span style="color: #ffa400;">Goals: </span></b> Ultimately, the goal is winning Google: the magic piece of content that yields higher domain authority, higher conversions, and more traffic. However, one piece of content cannot achieve all those things at the same time, so we need to create specific types of content to reach each objective, like </p><p>1. Funnel content,</p><p>2. How-to content, </p><p>3. Actionable content that generates higher conversions,</p><p>4. Inspirational content for more traffic, </p><p>5. Viral/editorial content that generates higher domain authority</p><p><b><span style="color: #ffa400;">Research:</span></b> This is the strategy of deciding which keywords to rank for. The creation process should start with keyword research. It includes high-converting landing pages, blog posts, and other gated content. The content organization is broken down into categories and subcategories, which surface the long-tail search queries you can use in your content. So, create a chart that will map the themes and the depth of the topics you write about for the right audience.</p><p><b><span style="color: #ffa400;">Authority:</span></b> This is what gives the content authority. A content cluster establishes authority with Google and with the audience. For example, to build authority on the term "web design", you may need to create a piece of viral, inspirational, and how-to content; together, this content supports your authority. It is also good to create infographics.</p><p><b><span style="color: #ffa400;">Promotions:</span></b> Once you have created the content, the next thing is to bring it to the right people. You may need at least one blog post targeting more keywords to promote the content. Depending on the goals, the process will vary. For example, if you want a high DA, the goal is high-quality backlinks, with a focus on promoting press coverage; for higher conversions and more traffic, you may need press mentions, for which you write for similar niches and anchor links back to your content.</p><p><b><span style="color: #ffa400;">User onboarding:</span></b> It means bringing users from point A (a specific pain point) to point B, showing them the solution that helps improve the status quo. Onboarding occurs before they get in touch with the solution or land on the website; it starts with educational content, promotional messages, etc. It is the first step of a journey that leads to success, and each step is an opportunity for your customer to improve their current situation with know-how. Effective onboarding brings prospects to the desired outcomes as quickly as possible: it is time-to-value. The sooner you prove that your application provides value, the more engaged and active users will be.</p><p> The onboarding phase creates lasting associations with your brand and product for the entire customer journey. The first impression of your software is an important component of a customer's success, so you are the one who drives adoption by introducing the key features of the app at the right moment. Trial users look for the following things,</p><p>1. The user signs up and moves on to the actual interface. They need to understand what the product is designed for, what it actually helps to achieve, and what benefits the solution presents.</p><p>2. The user understands the specific solution and why it is worth testing and exploring.</p><p>3. The user takes full control of the initial steps and tries to achieve quick wins, so you need to eliminate the confusion and meet the customer's expectations.</p><p>4. The user gets to know the key features and the workflow, and integrates with other solutions as well.</p><p>Pursuing these goals increases the chance of turning trial users into paying customers. The user should be able to access the service and the software elements in the user interface and feel compelled to work with your application. You can make the product successful by knowing the customer's objective and enabling them to reach it effortlessly, so that the user has a solid enough understanding of the product to use it to its fullest potential. The product manager should focus on the optimal customer journey and move on to designing it. The road map to long-term relationships with your customers includes the following,</p><p>a. A smooth CX (customer experience) throughout the entire customer journey, at every touchpoint</p><p>b. A great UX and thorough onboarding that eliminates frustration and delay and sustains engagement</p><p>c. A great brand and an awesome road map with flawless customer support</p><p>So, when you onboard your user, you need to make sure that you address the pain point with a solution, show the benefits of your product, explain how to approach the issue, etc. Instructional videos and well-timed drip campaigns will improve the onboarding experience.</p><p><b><span style="color: #ffa400;">B2B Client Experience:</span></b> The B2B buying process has changed dramatically, and selling in the traditional way is no longer effective. B2B merchants look for new ways to serve clients through digital orders. Sales have become more complex and increasingly context-driven. B2B buyers are eager to research and determine a vendor that meets their needs. Clients want a seamless, consistent, flexible buying experience, so your success depends on how you keep pace with their desires. Across all the channels and touchpoints, a blend of self-service and assisted sales approaches is emerging as the solution. Understanding current B2B buyer profiles and their buying practices will help your business create the right customer experience and close more sales. Basically, these buyers are demographically and geographically diverse and grew up in digitally-enabled environments.</p><p> B2B buyers are driven by rational, objective factors in making purchase decisions, and they actually buy as teams: different roles, different objectives, and different perspectives. The ultimate objective for the selling company is to remove the pain points throughout the sales process. At the same time, subjective and emotional factors will also contribute to the final decision.</p><p><b><span style="color: #ffa400;">eCommerce Subscription:</span></b> Businesses selling contracts rely on subscription billing platforms. The subscription platform your business employs must have a core feature set: an option to serve customers globally, automatic invoicing, refund capabilities, etc. A good subscription service platform supports end-to-end retention processes and capabilities to fight churn. </p><p> Your B2B subscription setup will depend on your business size and needs. Small businesses primarily need support for acquisition, since they are attracting their first critical pool of subscribers. Medium-sized businesses focus more on retention: they develop relationships with acquired customers and keep users engaged. Large companies need a recurring billing platform to support new markets or automate their internal processes. So, you need to understand how often and for what reasons your customers use your product, which enables you to pick a recurring payment platform; a reliable payment gateway for recurring billing is also important to any subscription business. There are various product pricing methods to consider for online products, like,</p><p><b><span style="color: #ffa400;">1. Cost-based pricing:</span></b> Identify how many units need to be sold to meet your expenses (see the break-even sketch after this list). This is good for establishing a baseline price.</p><p><b><span style="color: #ffa400;">2. Value-based pricing:</span></b> You should understand the value that customers place on the product and how they compare it to competing products. A common rule is that the customer should get a perceived value of 10x what they paid for a product or service.</p><p><b><span style="color: #ffa400;">3. Market-Oriented Pricing:</span></b> It is based on your industry, trends, competitor prices, etc. The quality of your product is a large factor here. 
Customers weigh the quality of your product against a competitor's product, and people are willing to pay more for good value, so the price should leave an impression on would-be buyers.</p><p><b><span style="color: #ffa400;">4. Dynamic Pricing:</span></b> Prices change based on market demand, weekly or daily. Raising and lowering the price based on demand can help you reach more people. Tools like <a href="https://www.quicklizard.com/">quicklizard</a> and <a href="https://www.omniaretail.com/">omniaretail</a> help companies with dynamic pricing.</p><p><b><span style="color: #ffa400;">5. Loss-leader pricing:</span></b> It is essentially a form of marketing: a type of pricing designed to attract buyers to the store or website. It can be done by bundling low-cost items with higher-cost items, so this method requires a lot of knowledge about costs and margins.</p><p><b><span style="color: #ffa400;">6. Market Penetration Pricing:</span></b> Start low to capture market share, then raise the price to build profitability. The main goal is to spark the curiosity of your prospects and entice them to become customers, but there is a risk that customers may choose to leave once the price increases.</p><p><b><span style="color: #ffa400;">7. Anchor Pricing:</span></b> It is for people who want to hunt for a deal: the company makes the regular price visible to buyers and then regularly sells at a discount.</p>
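<p>A minimal break-even sketch for the cost-based pricing method above; all the numbers are illustrative assumptions:</p>
<pre>
# Minimal cost-based pricing (break-even) sketch; numbers are illustrative.
fixed_costs = 50_000           # rent, salaries, tooling
variable_cost_per_unit = 12    # cost to produce and ship one unit
price_per_unit = 20

units_to_break_even = fixed_costs / (price_per_unit - variable_cost_per_unit)
print(f"Sell {units_to_break_even:.0f} units to cover expenses")  # 6250 units
</pre>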
Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-167635522653355502020-11-29T08:22:00.000-08:002020-11-29T08:22:01.608-08:00monday.com - Team to Bookout Desk<iframe style="background-image:url(https://i.ytimg.com/vi/z1bQ9KjK0dY/hqdefault.jpg)" width="480" height="270" src="https://www.youtube.com/embed/z1bQ9KjK0dY" frameborder="0"></iframe>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-53508551551193566592020-11-29T03:42:00.010-08:002021-07-01T18:40:26.533-07:00What are the basics in React, Firebase Real-Time Database and GraphQL for Application Development? <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGEoTRglwY9MJ0MnvUdocSihOEpv8MLvp7l8LLGfHo9fg53UClkeqyDv2ONTDN4nivCeEZu3YblWQK4UTC27zkghOkAiTm0xVi4xwOwnw3irGA-auu-qewo15CTekpwGV8Ib77MLT4_tM/s1095/Reag-firebase.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="401" data-original-width="1095" height="287" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGEoTRglwY9MJ0MnvUdocSihOEpv8MLvp7l8LLGfHo9fg53UClkeqyDv2ONTDN4nivCeEZu3YblWQK4UTC27zkghOkAiTm0xVi4xwOwnw3irGA-auu-qewo15CTekpwGV8Ib77MLT4_tM/w652-h287/Reag-firebase.jpg" width="652" /></a></div><br /><div><br /></div><div> React has seen several changes since its release in 2013. React 16 introduced improved features: async rendering, the ability to return an array of elements, and better error handling. For example, we can use react-dom to render an array from a component called Test. First, we define the Test component (<Test />), which will be mounted in the root element (document.getElementById('root')). Then, we create the function component (Test) that simply returns an array. This array can have different items. It can be written as,<div><br /><div> const Test = () => {</div><div>   return [</div><div>     <li key="1">Hi</li>,</div><div>     <li key="2">How are you</li></div><div>   ]</div><div> }<div><br /></div><div>It will render the array items in the browser. For rendering <span style="color: #cc0000;">dynamic data</span>, create a simple array called data and put the items in it. Now, instead of rendering the flat array, we just map over the data. So, first, pass the data to the component, which makes it accessible to the Test component in props (or destructured as {data}). Now, replace the return with another return that maps over the data in props. This will return an item for each entry in data.</div></div><div><br /></div><div> const data = [</div><div>   { Test: "Hi", id: 1 },</div><div>   { Test: "How Are you?", id: 2 }</div><div> ]</div><div> const Test = ({data}) => {</div><div>   return data.map(e => {</div><div>     return (</div><div>       <li key={e.id}>{e.Test}</li></div><div>     )</div><div>   })</div><div> }</div><div><br /></div><div> ReactDOM.render(</div><div>   <Test data={data}/>,</div><div>   document.getElementById('root')</div><div> )</div><div><br /></div><div><div><span style="color: #ffa400;"><b>Components:</b> </span> Components allow us to share state. There are 3 types of components. Those are,</div><div><br /></div><div>1. Compound Components</div><div>2. Uncontrolled Components</div><div>3. Controlled Components</div><div><br /></div><div>A <span style="color: #ffa400;">Compound Component</span> shares state information between the component and its children. The React <a href="https://reactjs.org/docs/react-api.html">Top-Level API</a> will help you learn more about creating React children or elements. It is useful for displaying something based on whether a variable is true or false. In an <span style="color: #ffa400;">Uncontrolled component</span>, you use DOM elements to build your form and capture the values of inputs through refs. In <span style="color: #ffa400;">controlled components</span>, the inputs accept a value as a prop and use a callback to change the value. Here, we are not using a ref; the data is pushed into the form component, so it always holds the current value of the input without asking for a reference. It keeps the state and input synchronized, which makes validation simpler and keeps our data accessible via the component state. </div></div><div> There are several lifecycle methods in the life span of a component. The most used method of a component is render. It is called when the component is first mounted, and it is also called when new props come in, setState is called, or data changes. The mounting lifecycle includes the following,</div><div><b><span style="color: #ffa400;">1. Constructor</span></b> - It is called before the component is mounted and is used to initialize the state.</div><div><span style="color: #ffa400;"><b>2. getDerivedStateFromProps</b></span> - It is invoked after the component is constructed and fires when the component receives new props.</div><div><b><span style="color: #ffa400;">3. componentDidMount</span></b> - It is invoked after the component is mounted. It is a good place to load data or make network requests. 
Then, <span style="color: #ffa400;">shouldComponentUpdate</span> invoked if this returns false, render and componentDidUpdate won’t be called. </div><div><span style="color: #ffa400;"><b>4. getSnapshotBeforeUpdate</b></span> called right before the recently rendered output. It is used to read the DOM to capture the value.</div><div>5. Finally, <span style="color: #ffa400;"><b>componentDidUpdate</b></span> invoked after the update and update have occurred.</div><div><br /></div><div> All of these do changes to react in handling asynchronous information. In react applications, we create nested component trees to compose user interfaces. There will be a state in every component to get some data. But, having the state in all our app made it difficult to keep values in sync. So, instead of putting state everywhere, we put the state at the root and pass the state data component via props. This will a create parent and child component.</div><div><br /></div><div><b><span style="color: #ffa400;">setState() Functionality:</span></b></div><div> We can change the data in our application using the setState. For ex, create the booking component responsible for rendering the bookings and make this a class component. This class component has a render method and set up an initial state. So, we will add the constructor and call the props, then add super and call props. Now, Initialize "this.state" with value, and this render return just the <div> at this moment. Now, add another component responsible for rendering the h1, and this stateless functional component displays props.room. This component will be rendered in Booking. The NowStaying method takes the property of room and set the value by "this.state.room" passing that value to the tree to NowStaying Component. Now, if you run the code in the browser, you can see "Deluxe Room". Next, we are going to add a method called addKitchen. This method set the state(this.setState({})). Usually, we pass it to an object that contains new information to overwrite whatever in the state such as "Deluxe Room with Kitchen". So, we have to trigger that function to a button next to our Now Staying Component. So, add the JSX expression that the "this.state.room" is Deluxe and we are going to render a button and an onclick that maps to this.addKitchen method. So, when you run this in the browser, you can see "Deluxe Room with Kitchen". The code should be,</div><div><br /></div><div><div><script type="text/babel"></div><div><br /></div><div> const NowStaying = props => <h1>{props.room}</h1></div><div><br /></div><div> class Booking extends React.Component {</div><div> constructor(props){</div><div> super(props)</div><div> this.state = {</div><div> room: "Deluxe Room"</div><div> }</div><div> this.addKitchen = this.addKitchen.bind(this);</div><div> }</div><div> addKitchen = () => {</div><div> this.setState(prevState => {</div><div> return { room: `${prevState.room} with Kitchen`}</div><div> })</div><div> }</div><div> render() {</div><div> return (</div><div> <div></div><div> <NowStaying room={this.state.room} /></div><div> {(this.state.room === "Deluxe Room")</div><div> ? 
<button onClick={this.addKitchen}>Add Kitchen</button></div><div>           : null</div><div>         }</div><div>       </div></div><div>     )</div><div>   }</div><div> }</div><div><br /></div><div> ReactDOM.render(</div><div>   <Booking />,</div><div>   document.getElementById('root')</div><div> )</div><div><br /></div><div> </script></div></div><div><br /></div><div><br /></div><div><b><span style="color: #ffa400;">Fragments:</span></b> In React, a component can return multiple elements: a collection of list items, rows for a table, etc. Under the usual rendering rules, the wrapper elements can add a ton of additional nodes to the DOM. For example, the app returns a header, this header can return nav elements, and the nav can have many links. These JSX elements must be wrapped in an enclosing tag, and React provides Fragments for this, which add no extra node to the DOM. It can be represented as,</div><div><br /></div><div>const NavElements = () => {</div><div>  return (</div><div>    <React.Fragment></div><div>      <a href="/">Home</a></div><div>      <a href="/about">About</a></div><div>      <a href="/services">Services</a></div><div>      <a href="/contact">Contact</a></div><div>    </React.Fragment></div><div>  )</div><div>}</div><div><br /></div><div>const App = () => {</div><div>  return (</div><div>    <header></div><div>      <nav></div><div>        <NavElements /></div><div>      </nav></div><div>    </header></div><div>  )</div><div>}</div><div><br /></div><div>It allows you to wrap the header, nav, and NavElements components that return multiple elements. If you want to render items from an array, think of a small ski dictionary with terms like skiing and snowboarding. Here, we will pass data to the component via props, i.e., skiDictionary={skiDictionary}. Then, we will build a description list with our component. A description list is the HTML tag <dl> that describes a list of items. We are going to take the props and return a definition list <dl> that maps over props.skiDictionary. It can be written as,</div><div><br /></div><div><div><script type="text/babel"></div><div> const schedules = [</div><div>   { id: 1, name: "Mark", division: "Admin Team" },</div><div>   { id: 2, name: "Mike", division: "Development Team" }</div><div> ]</div><div><br /></div><div> const App = (props) => {</div><div>   return (</div><div>     <dl></div><div>       {props.skiDictionary.map(term => (</div><div>         <React.Fragment key={term.id}></div><div>           <dt>{term.name}</dt></div><div>           <dd>{term.division}</dd></div><div>         </React.Fragment></div><div>       ))}</div><div>     </dl></div><div>   )</div><div> }</div><div><br /></div><div> ReactDOM.render(</div><div>   <App skiDictionary={schedules}/>,</div><div>   document.getElementById('root')</div><div> )</div><div> </script></div></div><div><b><span style="color: #ffa400;">GraphQL: </span></b> It is a data query and manipulation language for APIs that fulfills queries using existing data. Based on the data model, GraphQL returns the data in the same shape as you requested it. It can be connected to any database or storage engine. A GraphQL query looks like JSON. 
For ex,</div><div><br /></div><div>query {</div><div>  boards {</div><div>    id</div><div>    name</div><div>    subscribers {</div><div>      name</div><div>      photo_thumb</div><div>    }</div><div>  }</div><div>}</div><div><br /></div><div><span>The response for your query is,</span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja8OXVXpe96-4N_KOxi_KsFNZzpEXYiWH0QpG8KHrSWMr4iwLXuSRCbIiYkxWf51jEHqwnT7kYgyEh7QZsJAZD4fAdByt2gz2o67IFqEcbGr7jAGgKpuYVdbNap9dI8e54WkKzQc2vVaE/s1300/monday.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="619" data-original-width="1300" height="354" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEja8OXVXpe96-4N_KOxi_KsFNZzpEXYiWH0QpG8KHrSWMr4iwLXuSRCbIiYkxWf51jEHqwnT7kYgyEh7QZsJAZD4fAdByt2gz2o67IFqEcbGr7jAGgKpuYVdbNap9dI8e54WkKzQc2vVaE/w640-h354/monday.jpg" width="640" /></a></div><br /><span><br /></span></div><div>Here, the shape of our request matches the shape of the response. GraphQL fills in the values for the keys that were requested. This shape-matching is a powerful feature of GraphQL, and another key feature is the type system.</div><div><br /></div><div> In GraphQL, every field has an associated type, which can be inspected in GraphiQL by clicking on it. If you click the query, you will see the available fields that you can query. For example, a field account may return an Account, and if its type carries an exclamation mark, it will never return null. A field asset may be a function that takes an id of type Int and returns an Asset, but it returns null if the Asset doesn't exist. All field declarations are considered optional (nullable) by default. A GraphQL schema with a user-defined type User can be written as,</div><div><br /></div><div> type User {</div><div>   login: String!</div><div>   email: String</div><div>   # more fields</div><div> }</div><div><br /></div><div> type Query {</div><div>   user(login: String!): User</div><div>   viewer: User</div><div>   # more fields</div><div> }</div><div> </div><div> Here, the user has two fields: login, a non-null String, and email, a String. The type Query has special importance: it is where reading data starts. For example, to query for the user with the login "Balamurugan" and request fields such as login, name, and bio,</div><div> query {</div><div>   user(login: "Balamurugan") {</div><div>     login</div><div>     name</div><div>     bio</div><div>   }</div><div> }</div><div>It returns,</div><div><br /></div><div> {</div><div>   "data": {</div><div>     "user": {</div><div>       "login": "Balamurugan",</div><div>       "name": "Balamurugan",</div><div>       "bio": "This is the test bio"</div><div>     }</div><div>   }</div><div> }</div><div>It is good to use a named query, where the name goes right after the query keyword. Variables can be defined in the query arguments, such as "query getUser($login: String!)". This variable is then passed as an argument to the user field. 
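As a minimal sketch, not from the original post, here is how that named query and its variable could be sent from JavaScript with fetch; the endpoint URL is a placeholder, not a real server.

const query = `
  query getUser($login: String!) {
    user(login: $login) {
      login
      name
      bio
    }
  }`;

// POST the query and its variables as JSON to the GraphQL endpoint
fetch("https://example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query, variables: { login: "Balamurugan" } })
})
  .then(res => res.json())
  .then(result => console.log(result.data.user));

The server validates $login against the schema and returns the same shape the query asked for.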
GraphQL enforces the correctness of inputs and returns an error if a request doesn't satisfy the specification. It allows combining multiple queries into one, and the fields in the result must match the fields in the query. So, craft the queries to return all the data your application might need in one single request.</div><div><br /></div><div><div><b><span style="color: #ffa400;">Read and Write Data from Firebase:</span></b> The <a href="https://felgo.com/doc/felgo-firebasedatabase/#getValue-method">Firebase database</a> allows storing user-specific data in Firebase. It is synced across all clients in real time and remains available offline. When the user goes online, the changes are synced to the Firebase cloud back end. Data in Firebase is stored as a JSON tree.</div><div> The private features of the Firebase database work through a combination of FirebaseAuth and authorized users. The public values can be accessed without authentication, depending on the database rules. So, you can restrict access to parts of the database to registered users. </div></div><div><br /></div><div><span style="color: #ffa400;"><b><span>Read data from Firebase:</span></b> </span>Firebase is a back-end service provider (BaaS) that provides database, authentication, and cloud storage services. Once you create an application on the Firebase console, choose the Real-Time Database. Then, set the security rules. Finally, configure Firebase into your app by,</div><div><br /></div><div>import React from "react";</div><div>import firebase from "firebase";</div><div>import config from "./config";</div><div><br /></div><div>In config.js, you can put your credentials, which contain,</div><div>const config = {</div><div>  apiKey: "{your key}",</div><div>  authDomain: "{your key}",</div><div>  databaseURL: "{your key}"</div><div>}</div><div><br /></div><div>export default config</div><div><br /></div><div>Creating an application:</div><div> First, initialize the Firebase app in the constructor. </div><div><br /></div><div>class App extends React.Component {</div><div>  constructor(props){</div><div>    super(props)</div><div>    firebase.initializeApp(config);</div><div><br /></div><div>    this.state = {</div><div>      employees: []</div><div>    }</div><div>  }</div><div>}</div><div>Then write the logic for getting and saving data. writeUserData will write our state into the database. getUserData will create a listener on the '/' path, and on value changes, we will assign the snapshot value to the state.</div><div><br /></div><div>writeUserData = () => {</div><div>  firebase.database().ref('/').set(this.state);</div><div>  console.log('Data Saved');</div><div>}</div><div><br /></div><div>getUserData = () => {</div><div>  let ref = firebase.database().ref('/');</div><div>  ref.on('value', snapshot => {</div><div>    const state = snapshot.val();</div><div>    this.setState(state);</div><div>  });</div><div>  console.log('Data Retrieved');</div><div>}</div><div><br /></div><div>Put writeUserData into componentDidUpdate, and put getUserData into componentDidMount, as </div><div><br /></div><div> componentDidMount() {</div><div>   this.getUserData();</div><div> }</div><div><br /></div><div> componentDidUpdate(prevProps, prevState) {</div><div>   if (prevState !== this.state) {</div><div>     this.writeUserData();</div><div>   }</div><div> }</div><div>Now we will map our employees array from the state and put each item in a card component. Each card has a delete and an update button. When the user clicks the delete button, we will filter out the specific item and delete it. 
Similarly, when the update button is clicked, we will load the item's data into the form.</div></div>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-4294296537304631452020-11-16T03:45:00.001-08:002020-11-16T03:45:59.966-08:00Mexa Wallet<iframe style="background-image:url(https://i.ytimg.com/vi/BAzBsovCIp0/hqdefault.jpg)" width="480" height="270" src="https://www.youtube.com/embed/BAzBsovCIp0" frameborder="0"></iframe>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-61298470724656315842020-10-29T01:41:00.007-07:002020-11-11T04:42:50.066-08:00How to write Smart Contracts for Decentralized Application(DApps)?<p> </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMnG-Z-N3lmqx4Wx9GL5tQjbtfZK-zo9YE8YIkmy_EqJAowFQsRtCODh_25HayoCDu4bWYCNBIEM8ahIy8zTbMqN3DekKCi5cMY_gv0a8yc49oijAmWjdXa-2ZSrd5yiexS2FmAUbkmr0/s579/smart-contract.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="387" data-original-width="579" height="428" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMnG-Z-N3lmqx4Wx9GL5tQjbtfZK-zo9YE8YIkmy_EqJAowFQsRtCODh_25HayoCDu4bWYCNBIEM8ahIy8zTbMqN3DekKCi5cMY_gv0a8yc49oijAmWjdXa-2ZSrd5yiexS2FmAUbkmr0/w640-h428/smart-contract.jpg" width="640" /></a></div><br /><p></p><p> <a href="https://solidity.readthedocs.io/en/v0.7.4/">Solidity</a> is a contract-oriented, high-level language influenced by Python, C++, and JavaScript for implementing smart contracts, designed for the Ethereum Virtual Machine (EVM). Ethereum is the platform, and Solidity is the language used to build DApps. It supports libraries, types, and inheritance, and it targets the EVM. Solidity has many plugins and extensions for editors such as Visual Studio, VSCode, Atom, and more. You can use it to build your own DApps, which suit applications that automate direct interaction between peers or facilitate group action across the network.</p><p> A smart contract is code that handles a transaction between two parties without the need for third parties. You can add as many rules as you wish to your smart contract, and the code will handle your affairs. For example, when you deposit a cryptocurrency into a smart contract, it will execute the contract once the conditions are met, which puts the transaction fully under the contract's control and automates it. Ethereum is a programmable blockchain that offers peer-to-peer transactions that are safe and proven across the network. Each node in the chain runs the EVM, where all your contracts are executed. The EVM maintains consensus across the network and is isolated from the file system and other processes. You can install the Solidity compiler by following the <a href="https://solidity.readthedocs.io/en/v0.7.4/">Solidity documentation</a>: find the Docker images, binary packages, or compiler build suitable for your local environment and install what you need. 
Feel free to install the <a href="https://solidity.readthedocs.io/en/v0.7.4/resources.html?highlight=Integrations#solidity-integrations">Integrations</a> for your editor, like Sublime Text, Atom, Visual Studio, etc. If you just want to work on the basics of Solidity, Remix can compile and run your Solidity code.</p><p><b><span style="color: #ffa400;">Solidity Basics:</span><span style="color: #20124d;"> </span></b> The first thing you need is to declare the Solidity version using the pragma keyword, which tells the compiler which language version the file is written for, followed by the version number. Once the Solidity file is set up, you can import any other file with the import command, then set up your contract. Solidity contracts are similar to classes in an object-oriented language. The state variables, functions, events, struct types, and enum types are written inside the contract. So, declare the state variable, which holds the current status: first declare the type, then name the variable that stores the data. Ethereum has an address data type that is really useful at all times; addresses are usually called wallets or wallet addresses. The data types in Solidity are,</p><p>1. The enum data type helps to define user types like buyer and seller.</p><p>2. Mappings are like dictionaries in other languages: they map keys to values and can be public or private.</p><p>3. The special variables in Solidity include the message and the transaction. Basically, the message is what is sent, so you can access the sender, the value, and the message data. They can be accessed as msg.sender, msg.value, and msg.data. </p><p>4. When a transaction is made, you can access its originating address through the variable called tx.origin. </p><p> A modifier is a condition of a function, checked before we run the function. It is declared with the modifier keyword and the name of the modifier. Then, set the condition for the modifier with a require statement that describes what is required, and use the underscore-semicolon syntax (_;) to close the modifier. A function that uses a modifier is declared as in other languages: the function keyword and the function name, with its parameters and its visibility (public or private). The body of the function contains what is going to happen in the function. Basically, the modifier checks, before the function runs, that the condition is met; only then does it execute the function. The last thing is the event, which is similar to JavaScript events. An event declares what it expects, such as an address; when the event is emitted with those values, listeners can do something afterward.</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzhEnyNpch0XYCxCbrWPLoNnNgz8M6L1a0ewPk3JDG-elHkb9t2TJ9xSG7SUrp9Cq46dXblFeNgMpA7_dY9-47JXFHE45Wtm8y4YUDLptZbTgaoCv4rKHzhWSOLpFaez2pq8zxvxB1oL4/s506/program.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="506" data-original-width="506" height="294" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzhEnyNpch0XYCxCbrWPLoNnNgz8M6L1a0ewPk3JDG-elHkb9t2TJ9xSG7SUrp9Cq46dXblFeNgMpA7_dY9-47JXFHE45Wtm8y4YUDLptZbTgaoCv4rKHzhWSOLpFaez2pq8zxvxB1oL4/w320-h294/program.jpg" width="320" /></a></div><p></p><b><span style="color: #ffa400;">Functions and Conditionals in Solidity:</span></b> Functions take inputs as parameters (conventionally prefixed with an underscore) like most programming languages, but Solidity differs in being able to return multiple outputs. 
The keywords to use when you declare your function are public or private. A private function is accessible only within the contract and will not be accessible outside of it. If you want a function to be accessible from outside the contract, make it public. You can use the keyword pure for functions that only work with constants and do not change state, and put the return statement, with its return type declared, inside the function. Solidity has conditional statements such as if and the for loop, but switch and goto are not available. The conditionals are very similar to other programming languages.<p></p><p><span style="color: #ffa400;">Writing a Smart Contract: </span> Once you know the basics of Solidity, you can start writing the smart contract. Initially, we split the project into 2 directories, truffle and web. It is good to follow these steps when you write the code inside the contract,</p><p><span style="color: #f9cb9c;">Step 1: Setting up the initial variables:</span></p><p> Let's take the example of sending ether with an approval process handled by a third party. So, create the contract ApprovalContract, then create the variables, modifiers, and functions. Here, variables of type address will represent the sender, the receiver, and the approver.</p><p><span style="color: #f9cb9c;">Step 2: Add the modifiers:</span></p><p> We are not adding any separate modifiers for the approval process. So, we directly write the function to handle the deposit. </p><p><span style="color: #f9cb9c;">Step 3: Finalize the Functions:</span></p><p> The deposit function takes the address where the money will be sent and is marked external, </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKQ6TdNRK-puAq3YGXadBoKeP4RWJ-GZ5rSY0zanchq_T_yz3AKxiSm_kQEus2NzVOKtHEsrwVHneiIBIZGa3cRxVa1unlhpAgfe3JsFSUQ8qpoXtt6syhAy1XnA4IB2ICqsD55X2ldTw/s791/smartcontract2.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="520" data-original-width="791" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKQ6TdNRK-puAq3YGXadBoKeP4RWJ-GZ5rSY0zanchq_T_yz3AKxiSm_kQEus2NzVOKtHEsrwVHneiIBIZGa3cRxVa1unlhpAgfe3JsFSUQ8qpoXtt6syhAy1XnA4IB2ICqsD55X2ldTw/s320/smartcontract2.jpg" width="320" /></a></div>which allows it to be called from outside the contract. The payable modifier allows it to receive ether. When these functions are called, the special message variable contains the data, including the sender (msg.sender) who is sending the money and the value (msg.value) that is being sent. Write a require that the message value is greater than zero in order to deposit the ether. Then, set the stored sender equal to msg.sender. This is going to be saved in the smart contract and will be written to the blockchain. Also, set the receiver as well. <p></p><p></p><p></p><p> The viewApprover function is going to be external and pure. It's a lightweight call that does not cost gas and returns the approver's address. </p><p> Finally, the approve function releases the money and sends it to the receiver; it is an external and payable function. Approval is possible when the address of the sender matches the approver, and the transfer function is used to send the money; alternatively, you can use the send method, which returns true or false depending on whether it succeeds. 
The transfer function is called with the address the money is being sent to; the contract's address gives access to the balance stored in the contract, so that balance can be sent on to the recipient. Solidity looks like JavaScript, but it is a deeper language that requires a lot of best practices.</p><p></p><div class="separator" style="clear: both; text-align: center;"><br /></div><span style="color: #f9cb9c;">Step 4: Compile and Migration:</span> <div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIMEIFlqSmmP2_iimtN68XOvDSsJiWbn7N1FvFsP0qL-taGfAH_b-VEtX3UCVwid-tYWhtrNg3375tJcrhHJWvGYFs4x6ckdtJFU8DZQxWaFSHYLQO0sCNXrMV7KbPptSTm2QA8OE-3dQ/s502/migrations.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="502" data-original-width="372" height="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIMEIFlqSmmP2_iimtN68XOvDSsJiWbn7N1FvFsP0qL-taGfAH_b-VEtX3UCVwid-tYWhtrNg3375tJcrhHJWvGYFs4x6ckdtJFU8DZQxWaFSHYLQO0sCNXrMV7KbPptSTm2QA8OE-3dQ/w237-h263/migrations.jpg" width="237" /></a></div><div><p> Once the Solidity code is compiled successfully with the command "truffle compile" in your truffle directory, you can migrate the build, which is stored in the build directory, using a migration script. So, you need to set up the Truffle configuration file (truffle.js) by just uncommenting the development network. You need to create 2_deploy_contract.js in migrations and create a variable for ApprovalContract to deploy the contract. You can enter the command "truffle develop" in cmd, which opens the truffle prompt. When the truffle prompt is active, run "migrate" there, or run "truffle migrate --network development" from another command line. Now the contracts are deployed to the network.</p><p><span style="color: #f9cb9c;"><span><br /></span></span></p><p><span style="color: #f9cb9c;"><span>Step 5: </span> Test your contract with Truffle:</span> </p><p> The smart contract needs to be fully tested before deploying to the mainnet because of the immutable nature of the blockchain and because cryptocurrencies handle financial transactions. Truffle allows writing tests in JavaScript to ensure things are working correctly.<span> </span></p><p> The test JavaScript in the Truffle test directory</p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqOguMEIryXlIAh7hgdNaS1gs4D31mzsqoM7en1i2rQFLk3_XJ2Hn3oDZRXSSIFZFdvbgEK5L5lavTU9_Y8ysMdZ-gkxe70Wg_bhKm-xQjK5AMjyHUGpop7FiZ8KXph19q1-XI5XS9wNk/s956/test.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="212" data-original-width="956" height="106" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqOguMEIryXlIAh7hgdNaS1gs4D31mzsqoM7en1i2rQFLk3_XJ2Hn3oDZRXSSIFZFdvbgEK5L5lavTU9_Y8ysMdZ-gkxe70Wg_bhKm-xQjK5AMjyHUGpop7FiZ8KXph19q1-XI5XS9wNk/w320-h106/test.jpg" width="320" /></a></div>(ApprovalContract.js) will be run by the command "truffle test" in the truffle development environment. 
If it is successful, you can see ApprovalContract, "initiates the contract", and a "1 passing" message in the command line.<p></p><p> </p><p><span style="color: #ffa400;">Basics of Web3.js:</span> <a href="https://web3js.readthedocs.io/">Web3.js</a> is a JavaScript library used to interact from client-side applications with the </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfpGJTHwLIFSE4cnW_D_NB7sM8atoo6Y5v9DxA2eqzx6d5AS9jvi8wUAUEspmdjlxhikBvoqy8kZ7T_jSQpQ9P0FqiiGMp4IPn65ubGCXi9FH0UQzdPoUZyysIj4ujwsnGHzK6zRVf9TE/s645/script.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="247" data-original-width="645" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfpGJTHwLIFSE4cnW_D_NB7sM8atoo6Y5v9DxA2eqzx6d5AS9jvi8wUAUEspmdjlxhikBvoqy8kZ7T_jSQpQ9P0FqiiGMp4IPn65ubGCXi9FH0UQzdPoUZyysIj4ujwsnGHzK6zRVf9TE/s320/script.jpg" width="320" /></a></div>Ethereum network. You can install it using node or the <a href="https://github.com/ethereum/web3.js/tree/1.x/dist">CDN</a> (web3.min.js) <span>and save it to your project web directory. It helps you send Ether, get transaction details, get balances, interact with a smart contract, add a signature to a transfer, etc. The </span>web3.eth package is used to interact with the Ethereum blockchain and Ethereum smart contracts. You can check the other packages for DApp development. Now, we can plug web3 into the DApp. First, add the web3.js script to the DApp and create another script block. Create the instance of web3 and pass an HTTP provider from web3.providers; port 9545 is the address of the truffle development server. If you enable interaction with MetaMask, MetaMask will inject its own web3, so write the code to use the current provider with web3 when one is present.<p></p><p></p><p></p><p></p><p><span style="color: #ffa400;">Building a Dapp:</span> We will use Visual Studio Code to build and run the DApp. Once you have </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyucykwLRKZmBh2ISyP5gsCWzFEF8iBVxnKTqIEgqr3LN2VfYgxrvYRCsjoqEI9MLUvc-5ezoxaMR2p3AOlaTmwqzmsIz8B0Ls3wljZC9NHGSeo7M_FqKv3pJdP34fC-hw9dXvc-0yNMg/s1056/contract.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="619" data-original-width="1056" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyucykwLRKZmBh2ISyP5gsCWzFEF8iBVxnKTqIEgqr3LN2VfYgxrvYRCsjoqEI9MLUvc-5ezoxaMR2p3AOlaTmwqzmsIz8B0Ls3wljZC9NHGSeo7M_FqKv3pJdP34fC-hw9dXvc-0yNMg/s320/contract.jpg" width="320" /></a></div>connected to the smart contract, write a script to verify that people entered a correct Ethereum address. So, you need to override the default submit action of the form (contract-form) and test the functionality that you want. The web3.utils package's isAddress function will help to validate the Ethereum address. Then, you can check the balance with the call web3.eth.getBalance(contract address). You can get the contract address when the truffle development server is running with the command "truffle migrate --reset".<p></p><p></p><p><span style="color: #ffa400;">Working with Smart Contract:</span> The power of a DApp is the smart contract, which is our back end interacting with the UI. So, the first thing we need is to get the ABI (Application Binary Interface). It is a JSON encoding of the smart contract that can be used as a client stub in a web service. 
You can get the ABI in many ways. Those are,</p><p><span style="color: #fce5cd;">1. Get the ABI property from Etherscan</span>: If you go to the contract on Etherscan, you can see the ABI and copy it directly.</p><p><span style="color: #fce5cd;">2. Using Remix to get the ABI</span>: Remix outputs an ABI that you can grab and paste into your project.</p><p><span style="color: #fce5cd;">3. Get the ABI from the Truffle Build Directory</span>: In Truffle's build directory, you can open the contract's <contract name>.json file and find the ABI.</p><p> So, copy the ABI, put it into a contractABI.js file assigned to a var abi in your project directory, and add the include script in the index.html file. Now, you can interact with the smart contract.</p><p> First, you need to create an instance of the contract that will use the ABI. So, create the </p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKlqRDjS33fe50ttff9RAf9SMyMN9jmpF8jPEUzryNsfkJ6d3ZWios9FF03VA13OwlyXS-u2VGrGbo1J9dua7neXYAqOVWPok8TFz3vTJWJW1Yi71jZQvS3sZfjSelzoqDguwV58XHIcE/s980/transaction.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="393" data-original-width="980" height="201" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKlqRDjS33fe50ttff9RAf9SMyMN9jmpF8jPEUzryNsfkJ6d3ZWios9FF03VA13OwlyXS-u2VGrGbo1J9dua7neXYAqOVWPok8TFz3vTJWJW1Yi71jZQvS3sZfjSelzoqDguwV58XHIcE/w320-h201/transaction.jpg" width="320" /></a></div>ApprovalContract variable and call new web3.eth.Contract, passing the ABI and the address. Once set up, run the DApp and make sure it is using the ABI in the debug console. Once you have validated this, you can use the ApprovalContract methods: call deposit and pass the to-address. The send function will be used to send the ether. After the object is created, write a function to check for errors and return the transaction id if it is successful. <p></p><br /><p><br /></p><p><br /></p></div>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-13274184922928782352020-09-14T22:09:00.014-07:002020-09-23T04:32:31.313-07:00What are the basics in DevOps: CI/CD Jenkin?<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh80rSact6YkGRIgKcL_FvyaXm6YgH420kja1cmTsWTaeuRV9SWua-iCkt9Zqkmhul1yY9_Q5_QKIGg0GEW_IC5_WDPkaEWsoPvsmhr6LFblq_FPOPz7vp3UMVquFb4nwKTULYQ5csK0sM/s800/devops2.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="800" height="441" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh80rSact6YkGRIgKcL_FvyaXm6YgH420kja1cmTsWTaeuRV9SWua-iCkt9Zqkmhul1yY9_Q5_QKIGg0GEW_IC5_WDPkaEWsoPvsmhr6LFblq_FPOPz7vp3UMVquFb4nwKTULYQ5csK0sM/w588-h441/devops2.jpg" width="588" /></a></div><br /> <p></p><p> The practice in software engineering of building and integrating the code automatically and periodically is called Continuous Integration (CI). The resulting binaries from building the code can then be deployed to the target environment; this practice is called continuous delivery. It aims at building, testing, and releasing software faster and more frequently. In the regular software development lifecycle, the team members check their code regularly into a source version control system like a Git repository. The CI server fetches these changes and integrates them. 
It runs the build process and produces a bundle of artifacts. These artifacts are versioned and stored in the build repository. The CI server also runs unit tests, integration tests, etc. It runs the entire process based on several triggers, many times a day, and sends notifications of success or failure. There are various products, tools, and frameworks available to implement a CI and CD pipeline in our environment. The most popular are Bamboo, CircleCI, Continuum, Travis CI, CruiseControl, TeamCity, AnthillPro, Buildbot, etc. You can host the CI server in your own network infrastructure, while other products are completely hosted in the cloud. It depends on the business and technical requirements of the project. </p><p><span> </span> When practicing CI, the developers commit their code into the version control repository frequently. It is easier to identify defects and other software quality issues in a small change set than in a large one developed over a long period. Teams implementing CI start with the version control configuration and practice definitions. The features and fixes are implemented in both short and long time frames. When a feature is complete, the developer merges the changes from the feature branch into the primary development branch. CI packages all the software and database components and runs the tests. Continuous delivery is the automation that pushes the application to the delivery environments. The development teams have development and testing environments. CI/CD is designed for businesses that want to improve their application delivery process. It standardizes builds, develops tests, and automates deployments for deploying changes in the code. The operations team can see greater stability in the standard configuration environment.</p><div>The essential CI/CD principles are,</div><p> * The system is architected to support iterative releases, and metrics help to detect issues in real time. </p><p> * Test-driven development keeps the code in a deployable state and works in small iterations. </p><p> * Developers push the code into production and ensure the new version of the software will work when it gets into the hands of the users. Anyone can deploy any version of the software at the push of a button. </p><p> * The engineering team should be responsible for the quality and stability of the software they build.</p><h2 style="text-align: left;"><b><span style="color: #ffa400;">The advantages of CI/CD are,</span></b></h2><p> 1. It detects problems or bugs as early as possible in the development cycle. The risks are minimized by integrating all the changes from the key members on a frequent basis.</p><p>2. As new changes and features are introduced into the source, the team can competently accommodate these changes.</p><p>3. The entire code base can be integrated, tested, and deployed with sufficient frequency that errors show up earlier in the cycle. The feedback will help to build better, higher-quality software. </p><p>4. Since the build is automated and the source is integrated, it can be continuously deployed. There is no delay in building the artifacts and satisfying the customer. All builds have a systematic build number. This results in a fully trackable build and deployment process. It gives us the ability to take any particular version of the software and trace back the exact commits or version of the system. </p><p>5. The CI and CD processes yield valuable information about the source code. It helps the team to analyze the broader perspective, for instance: 
Is the code coverage increasing over time? </p><p>6. Since the build, integration, and deployment are automated with the help of CI and CD, the team can ship the product faster and more consistently. Some organizations take automation to the extreme and ship code several times a day. </p><h2 style="text-align: left;"><span style="color: #ffa400;">DevOps Practices:</span></h2><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjZ_YbrDnNEyr9lNTHp-RyHGv37ArytIVVBQ8GD4aIxr4SpfX8iDWOcoFNPnstgGQoCv3jSgmfpaKFM0ua-hYgnfzRblCRhnawnXZvrVRrvrYGJGk2WBI1cANBCMzSMpF4ENSwMlShiNw/s390/devops.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="259" data-original-width="390" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjZ_YbrDnNEyr9lNTHp-RyHGv37ArytIVVBQ8GD4aIxr4SpfX8iDWOcoFNPnstgGQoCv3jSgmfpaKFM0ua-hYgnfzRblCRhnawnXZvrVRrvrYGJGk2WBI1cANBCMzSMpF4ENSwMlShiNw/w505-h336/devops.jpg" width="505" /></a></div><p> Software teams perform several tasks frequently, such as writing the code, testing it, staging it, and moving the code from the development to the production environment. Performing all the tasks manually consumes a lot of time and increases the likelihood of mistakes, configuration issues, and customer frustration. The DevOps process overcomes the challenges of the manual approach. As soon as new code reaches the source code repository, it can be automatically built, and the tests can be run to validate the code. This ensures everything runs in an integrated manner. Once the code is validated, we can go ahead with our build and the Dev, QA, and production deployments. DevOps culture enables and triggers a series of incremental process improvements. Implementing a DevOps culture and infrastructure can help you streamline the development process and detect and fix bugs faster. It provides a useful project dashboard, and the team can deliver more business value to users.</p><p> Jenkins is a popular tool for CI/CD and a continuous deployment framework for any platform and technology. It is a low-cost, open-source platform with a large, active community. It is widely used, from small startups to large corporations. This open-source automation tool comes with plugins built for DevOps purposes. Plugins allow the integration of various DevOps stages like configuration management, version control, build, continuous monitoring, etc. It can trigger a build for every change made in the source code repository. Once the code has been built, Jenkins deploys it on the test server for testing. The teams are constantly notified of build and test results. Finally, it deploys the built application to the production server. Jenkins supports thousands of plugins. The main advantage of Jenkins is that it increases developer productivity and brings agility to the development process by automating important tasks. </p><p> It uses a master and slave architecture to manage distributed builds. Master and slaves communicate using the TCP/IP protocol. The master's job is to pull from the SCM repository, whether Git or Subversion. It schedules the build jobs and dispatches them to the slaves for execution. Primarily, the master's job is to orchestrate the jobs across the whole cluster, although master instances can also execute build jobs directly. The slave Jenkins instance is a Java executable that can run on a remote machine. It accepts requests from the Jenkins master instance, executes the job, and reports the status back to the master. 
Slaves run on a variety of operating systems, and additional slave nodes can be added to the architecture as the system grows.</p><h2 style="text-align: left;"><span style="color: #ffa400;">Jenkins Installation:</span></h2><p> On the <a href="https://www.jenkins.io/">Jenkins</a> website, open the plugins page and you can see 1500+ community plugins. Important plugins, such as SSH, Git, and Maven, can be found and used there. Jenkins can be installed on your Ubuntu virtual machine or on GCP. To install Jenkins, open <a href="https://console.cloud.google.com/">GCP</a> and create a new instance for Jenkins to test CI. In the SSH connection, open the new browser window and execute the commands. Jenkins requires an up-to-date version of Java. Once you have installed the JDK with the command sudo apt install default-jdk, open the <a href="https://www.jenkins.io/doc/book/installing/">Jenkins</a> documentation and run the Debian/Linux commands.</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjPIZFIHRpQnVTjZFymit6WdTCniWTL57F5vUHPrt7zZ4fA3WF7_Ywo75_RE7Ur5pWVaoHZZTpI1xSktRg_Lh7oQG1fpGY3AERiUaNKC376iZByG_TiB12zfokbSpgm1Qm_slMVHBooYk/s1365/Jenkin.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="599" data-original-width="1365" height="290" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjPIZFIHRpQnVTjZFymit6WdTCniWTL57F5vUHPrt7zZ4fA3WF7_Ywo75_RE7Ur5pWVaoHZZTpI1xSktRg_Lh7oQG1fpGY3AERiUaNKC376iZByG_TiB12zfokbSpgm1Qm_slMVHBooYk/w662-h290/Jenkin.jpg" width="662" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;"> Once you have successfully executed the commands, create a firewall rule for Jenkins' port 8080 and click the external IP, which will open the Jenkins getting-started pages. Jenkins is a secure server; it needs an authenticated user to log in and work with it. To unlock Jenkins and run it securely, you must provide the default admin username and password in order to log in. Once you have entered the secret password, it will securely redirect you to the setup wizard and open your dashboard page. This is the central place to manage plugins and all the jobs.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWi1MOId3qvNvVYSstQsjSshIaw5sHhHkNcO1R10Kp2Pk72w7BW2OcWhPCM7j3A73u2wPB_qwQhBP1jPSjHRXngECjygcmQuH7FmfbxpbaRe1f3yLkmPdZ1hyphenhyphenarVL2ea26GqnCD9J-iC0/s1024/jenkins.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="763" data-original-width="1024" height="327" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWi1MOId3qvNvVYSstQsjSshIaw5sHhHkNcO1R10Kp2Pk72w7BW2OcWhPCM7j3A73u2wPB_qwQhBP1jPSjHRXngECjygcmQuH7FmfbxpbaRe1f3yLkmPdZ1hyphenhyphenarVL2ea26GqnCD9J-iC0/w591-h327/jenkins.jpg" width="591" /></a></div><br /><div class="separator" style="clear: both; text-align: left;"> Now, you can manage plugins and create jobs. In the global tool configuration, you can easily configure the various tools, locations, and installers for the build jobs in Jenkins so that they work for your automation project. A Jenkins pipeline helps you model the build process of the project by specifying the tasks and the order in which they are executed. 
For example: build assets, send an email on error, send the build artifacts via SSH to your application server, etc.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><h2 style="clear: both; text-align: left;"><b><span style="color: #ffa400;"> Creating a Job in Jenkins: </span></b></h2><div class="separator" style="clear: both; text-align: left;"> In order to create a new job, click "New Item", enter the item name, and select the freestyle project. This will create a new job in Jenkins. Basically, a job is a task that Jenkins executes. For example, to create a simple job, open its configuration, add an "Execute shell" build step on the Build tab with the command echo "Hello World", and save. Now, the job is configured and ready to run. To run the job, select the job name and click "Build Now". Once the job has run successfully, you can see some links. In the console output, you can see the output of the command that we put, echo "Hello World".</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"> Jenkins is a highly scalable framework that can handle a thousand builds simultaneously for large teams or sub-teams. It also supports a plethora of technologies, languages, and frameworks. So, we need to build scalable and distributed clusters of Jenkins nodes for demanding levels of efficiency. You must think about the master-slave architecture supported by Jenkins when scaling out. The master node is the central controller node, which coordinates and assigns build jobs across the cluster of nodes. People access Jenkins through the web interface running on the master node. Running and managing the build jobs are demanding tasks in Jenkins. Ensure that jobs are not running on the master node, and make regular backups of the configuration on the master node. Slave nodes are quick to set up and should be utilized to run most jobs. Moreover, clean up unnecessary plugins: they clutter the whole system and may cause unforeseen issues. </div><p></p>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com1tag:blogger.com,1999:blog-2943630978943594838.post-22443843350124892222020-07-31T08:44:00.001-07:002020-07-31T08:44:49.308-07:00onlineclass<iframe allowfullscreen="" frameborder="0" height="270" src="https://www.youtube.com/embed/qdwgj-snahM" width="480"></iframe>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-28049809730920682532020-07-14T23:07:00.000-07:002020-07-14T23:11:41.053-07:00What are the fundamentals of building the web application using Corvid by Wix?<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLj-XboosPtkxCUTVt1HdbRx6bYcoBkMx9pYGSoUHfwzPCzLWo8thaNDNXMpTDZZ5UwqUuCv7vZiqxzkGvFgQ0xMT7j8GPEWJEUEMwVwz1LQiatmnKj-jQD2_nkBd0nzBbgv9Exyu5qd4/s1196/wix.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="509" data-original-width="1196" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLj-XboosPtkxCUTVt1HdbRx6bYcoBkMx9pYGSoUHfwzPCzLWo8thaNDNXMpTDZZ5UwqUuCv7vZiqxzkGvFgQ0xMT7j8GPEWJEUEMwVwz1LQiatmnKj-jQD2_nkBd0nzBbgv9Exyu5qd4/w625-h266/wix.jpg" width="625" /></a></div>
<br />
<div>
<br /></div>
<div>
Corvid is an open development platform that accelerates the way you build web applications. It empowers the Wix visual builder with tools, custom functionality, and interactions using the Corvid APIs. It is an open, extendable platform for both the front end and the back end. Corvid adds a built-in IDE to Wix sites, so you can code directly in the Wix Editor. Enable Corvid on your website by turning on Dev Mode in the site's top bar. To build the application, you need to familiarize yourself with all the features and decide which functionalities work for you. So, follow these steps to get familiar with the basic structure and syntax of Corvid.<br />
<div>
<br />
<h4 style="text-align: left;">
Step 1: Create a new Wix Website: </h4>
<div>
Sign in to your Wix account and open a blank template in the editor.</div>
<h4 style="text-align: left;">
Step 2: Enable Corvid: </h4>
<div>
Enable Corvid in the Wix editor to work with code on your site.</div>
<h4 style="text-align: left;">
Step 3: Add Elements to the Page: </h4>
<div>
On the left side of the editor, click Add, then add the page elements, and set each element's ID in the Properties panel that appears on the right side of the Editor. </div>
<h4 style="text-align: left;">
Step 4: Add code: </h4>
<div>
1. Open the code panel at the bottom of the editor</div>
<div>
2. You can use this link for the API and code examples: "<a href="https://www.wix.com/corvid/reference/">https://www.wix.com/corvid/reference/</a>"</div>
<div>
3. Add the code: define the variables, the onReady() handler, the functions, etc. (see the sketch after these steps)</div>
<h4 style="text-align: left;">
Step 5: To See it in Action:</h4>
<div>
Finally, click Preview at the top right of the editor. Then, check the result and publish your site live and production-ready. </div>
<div>
<br /></div>
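As a minimal sketch of step 4 (not from the original post), the code below sets an element property and wires a click handler inside onReady(). The element IDs #greetingText and #actionButton are hypothetical and must match the IDs you set in the Properties panel.

$w.onReady(function () {
  // Runs once all the page elements are ready
  $w('#greetingText').text = "Welcome!";

  // React to a button click on the page
  $w('#actionButton').onClick(() => {
    $w('#greetingText').text = "Thanks for clicking!";
  });
});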
<div>
The following principles are applied when developing the application on Corvid.</div>
<div>
<h3 style="text-align: left;">
1. Coding: </h3>
<div>
Corvid supports JavaScript code. So, add JavaScript code to the Wix site and work with the <a href="https://www.wix.com/corvid/reference/">APIs</a> for custom functionality and interactions with the site. Corvid's server-side runtime is based on Node.js. You can export functions from the back end to the front end using <a href="https://support.wix.com/en/article/corvid-javascript-support#module-support">ES2016 modules</a> and Corvid web modules. It works with Wix apps' front-end and back-end events and their elements. </div>
<h4 style="text-align: left;">
Page Rendering Process:</h4>
<div>
<div>
The Corvid API empowers you to take full control of the site's functionality. The APIs are used to interact with page elements, site database content, and external services. To use the APIs, you need a working knowledge of JavaScript and ES2017 features. </div>
<div>
</div>
</div>
<div>
The Pages section of the site lists all the regular, <a href="https://support.wix.com/en/article/about-dynamic-pages-and-which-items-they-display" target="_blank">dynamic</a>, and <a href="https://support.wix.com/en/article/creating-a-router">router</a> pages on your website. Regular pages appear immediately beneath the Pages section title. You can change a page's settings when you hover over the page name. Dynamic pages with the same prefix are grouped together in the same section. You can add a dynamic page to a group by clicking the settings icon that appears when you hover over the section name. Dynamic pages turn one page design into many live site pages. There are two types of dynamic pages: item pages display one item from your content collection, while category pages act like a menu that links to your item pages. When you add a dynamic page to your site, both the item and category dynamic pages are added. The pages are added with dataset connectors automatically; the dataset connectors connect the pages to your collections. A router gives you complete control when handling certain incoming requests to your site. Enter the URL prefix for the router and click add and edit code. All incoming requests under that URL prefix are sent to the router for handling, as sketched below.</div>
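A minimal router sketch, assuming a router registered with the hypothetical prefix "myPrefix" and a router page named "hello-page"; the code lives in the backend routers.js file.

// backend/routers.js
import { ok, notFound } from "wix-router";

export function myPrefix_Router(request) {
  // request.path holds the URL segments after the prefix
  if (request.path[0] === "hello") {
    // Respond with the router page and pass it some data
    return ok("hello-page", { title: "Hello from the router" });
  }
  return notFound();
}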
<div>
</div>
<div>
A page needs to be set up before it is shown to the visitor. Setting up a page includes adding and positioning the elements and running the code that retrieves the page data or performs other setup operations. This is called rendering. The onReady() function runs in both the server and the browser: while a page request is being handled on the server, the browser downloads and executes the same code. After the server returns the response and the page is visible, the browser renders the page and makes it interactive. So, the onReady() function will run twice, once in the backend and once in the browser. You need to explicitly add code to prevent side effects like inserting an item twice into a collection. The Rendering API makes sure those parts of the code run only once. The wix-window property is used to track where your code is running: the "env" variable gets the current environment, returning "backend" when rendering on the server and "browser" when rendering on the client. So, you need the env property when you use the insert() function of the wix-data API, to add an item to your collection only once, as in the sketch below.</div>
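A minimal sketch of using the env property to run the insert only once, on the client; the "visits" collection name is hypothetical.

import wixWindow from 'wix-window';
import wixData from 'wix-data';

$w.onReady(async function () {
  // "backend" during server-side rendering, "browser" on the client
  if (wixWindow.rendering.env === "browser") {
    // Insert only on the client so the server-side render pass
    // does not create a duplicate item
    await wixData.insert("visits", { page: "home" });
  }
});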
<h3 style="text-align: left;">
2. Databases: </h3>
<div>
If you enable Corvid, it automatically adds <a href="https://support.wix.com/en/article/about-the-content-manager-7160473">Wix Data</a> to your site, which lets you work with built-in databases. Wix apps have amazing features that can enhance and grow your business. As you add Wix apps to your site, their data is automatically added as new collections on your site. Once you have enabled the databases on your site, you can use the Wix visual builder to <a href="https://support.wix.com/en/wix-data/connecting-content">connect your data</a> to elements on your site, for example to capture user input and create dynamic pages. Also, you can create dynamic pages using a <a href="https://support.wix.com/en/article/wix-code-creating-a-router" target="_blank">custom router</a>. Corvid supports connecting an external database using the <a href="https://www.wix.com/corvid/reference/external-database-collections.html" target="_blank">External Database SPI</a> and working with it in your site alongside the built-in collections; a query sketch follows below.</div>
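A minimal sketch of reading a built-in collection with the wix-data API; "Employees" and the "division" field are hypothetical names.

import wixData from 'wix-data';

$w.onReady(function () {
  wixData.query("Employees")
    .eq("division", "Development Team")
    .find()
    .then(results => {
      // results.items is an array of the matching collection items
      console.log(results.items);
    });
});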
<div>
</div>
<h3 style="text-align: left;">
3. Importing Data into the Content Manager:</h3>
<div>
You can create and store content in collections using the Content Manager. Connect your collections to various elements in the editor to display content or capture form submissions. You can store the site's content in content collections. This could be content you create, content you capture, or both. The content is stored in a grid layout made up of items (rows) and fields (columns). When you create a new collection, you can choose a preset collection or start from scratch. Once you have added the Content Manager, certain editor elements like text and images can connect to your new content collections.</div>
<div>
If you have stored your information in a spreadsheet, you can import the data into database collections from a CSV file. You can also export the data, make changes in the spreadsheet application, and import it again into the database collection. You can import to the Sandbox database in the Editor and to the Live database in your dashboard. When importing to the Sandbox database, you can add new fields to the database structure at the same time. But when you are importing to the Live database, you can only import fields that are already part of the database structure. </div>
<h4 style="text-align: left;">
Forms:</h4>
<div>
The Members Area lets your visitors register and become members of your site. There are 3 types of membership forms. Those are,</div>
<div>
1. Default Signup form</div>
<div>
2. Custom Signup form</div>
<div>
3. Corvid Form</div>
<div>
In the editor, just click on Menus & Pages, then click the member signup form and choose the type of form. For the Corvid form, you select your form from the "what does it link to" drop-down menu. Custom forms address site-specific needs by adding user input elements and the Content Manager to your site. Follow these steps when creating a custom form with user input,</div>
<div>
<br /></div>
<h4 style="text-align: left;">
Step 1: </h4>
<div style="text-align: left;">
Start by creating the collection that stores the form's submission information</div>
<h4 style="text-align: left;">
Step 2: </h4>
<div style="text-align: left;">
Add the user input elements to create a form on the page</div>
<h4 style="text-align: left;">
Step 3: </h4>
<div style="text-align: left;">
Set up each element using its settings panel</div>
<h4 style="text-align: left;">
Step 4: </h4>
<div style="text-align: left;">
Add and set up a dataset to connect your page elements to your collection</div>
<h4 style="text-align: left;">
Step 5: </h4>
<div style="text-align: left;">
After setting up a dataset, connect your elements to it</div>
<h4 style="text-align: left;">
Step 6: </h4>
<div style="text-align: left;">
Add a submit button and connect it to your dataset, allowing the user to submit the information to your collection. When a visitor submits the form, you can see the information in your inbox,</div>
<div style="text-align: left;">
* email account you set in your form settings</div>
<div>
* the submission table in the dashboard</div>
<div>
<h3 style="text-align: left;">
4. Open Platform: </h3>
<div>
Corvid extends the website's functionality to other services: you can install <a href="https://support.wix.com/en/article/corvid-working-with-npm-packages">NPM packages</a>, use fetch to call external APIs, and expose the site's functionality as an API with <a href="https://support.wix.com/en/article/corvid-exposing-a-site-api-with-http-functions">HTTP functions</a>. You can bring complex functionality to your site with packages from the npm software registry. In npm, each reusable library of code is known as a package. Once it is installed in the site structure, you can import the package and use it in your code. An HTTP function can be sketched as shown below.</div>
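A minimal sketch of exposing the site's functionality as an API; the function name hello is hypothetical, and the code goes in the backend http-functions.js file, which makes it reachable at /_functions/hello.

// backend/http-functions.js
import { ok } from 'wix-http-functions';

// Handles GET https://<your-site>/_functions/hello
export function get_hello(request) {
  return ok({
    headers: { "Content-Type": "application/json" },
    body: { message: "Hello from Corvid" }
  });
}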
</div>
<h3 style="text-align: left;">
5. Secret Manager:</h3>
</div>
<div>
Corvid allows you to integrate third-party services into your site's code. Third-party services require an API key for authentication: the service provides the key, which you add to the code that calls the service. The Secrets Manager lets you store secrets such as API keys safely and encrypted in the site's dashboard. The procedure to work with API keys is,</div>
<div>
1. First, get the private information as an API key from the 3rd party service.</div>
<div>
2. Store the private information with a new name in the secret manager</div>
<div>
3. In the backend code, extract the value of the secret using the getSecret() function instead of hard-coding the API key, as in the sketch below. </div>
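A minimal backend sketch, assuming a secret saved under the hypothetical name "myServiceKey" in the Secrets Manager.

// Backend code only: the secret never reaches the browser
import { getSecret } from 'wix-secrets-backend';

export async function getServiceKey() {
  // Reads the value stored under "myServiceKey" in the Secrets Manager
  const apiKey = await getSecret("myServiceKey");
  return apiKey;
}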
<div>
<br /></div>
<h3 style="text-align: left;">
6. Web modules:</h3>
<div>
Web modules enable you to write functions that run server-side in the back end and call them from client-side code. With web modules, you can import backend functions into front-end files. You need a web module when the code should not run client-side or when it accesses other web services. You can only export functions from web modules. For example, if you want to enable a site visitor to send an email, you need to write the function that sends the email server-side, because the code may have security issues if it runs client-side. You can also import a web module function into another backend module or a front-end file, as sketched below.</div>
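A minimal web module sketch; the file name email.jsw and the function are hypothetical. The .jsw extension marks the file as a web module that the front end can import.

// backend/email.jsw
export async function sendWelcomeEmail(address) {
  // Runs server-side, so credentials and service calls stay in the backend
  console.log(`Sending a welcome email to ${address}`);
  return true;
}

// On a page (front-end code):
// import { sendWelcomeEmail } from 'backend/email';
// $w.onReady(() => sendWelcomeEmail('visitor@example.com'));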
<div>
<br /></div>
<div>
</div>
<div>
<br /></div>
<div>
<br /></div>
</div>
</div>
</div>
Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-67460579972360785082020-07-08T05:03:00.001-07:002020-07-08T05:03:12.511-07:00Atlassian Codegeist'20<iframe allowfullscreen="" frameborder="0" height="270" src="https://www.youtube.com/embed/jKp87Sop6MU" width="480"></iframe>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-22944895332823465112020-06-15T06:03:00.007-07:002020-06-17T03:12:27.391-07:00What are the basics in the web application for production environment?<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6hfkv-Owoq6LyJ0lWXFpF5v8fyqVFXsVRsN54CsNioG-5B9C55pacQlLCkfKmwfWp6x240mwJalsKGQ8BMqLeL91Yhx99umWJKG8uimhn5BbGhU9cwntT0RIR8NTOx7VFq8iB1d2agRg/s700/UX.jpg" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="300" data-original-width="700" height="268" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6hfkv-Owoq6LyJ0lWXFpF5v8fyqVFXsVRsN54CsNioG-5B9C55pacQlLCkfKmwfWp6x240mwJalsKGQ8BMqLeL91Yhx99umWJKG8uimhn5BbGhU9cwntT0RIR8NTOx7VFq8iB1d2agRg/w625-h268/UX.jpg" width="625" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><h1 style="text-align: left;"><font color="#b51200"> UX/UI Design:</font></h1><div><div><span style="font-size: large;"> Every time you interact with users, you have access to precious information. We can design a useful and usable system if we have a good understanding of the individuals and how they currently complete a given task. User experience work makes the product or service easier to use and optimizes the experience of the user. Generally, UX work should be supported inside your organization. If the enterprise's infrastructure does not support UX, it is harder to have an impact and influence on the design. If you are a UX designer, you need to think about how to work more efficiently or have a larger impact on the organization. There are 3 fundamentals in good design. Those are,</span></div><div><span style="font-size: large;"> 1. Affordance</span></div><div><span style="font-size: large;"> 2. Signifiers</span></div><div><span style="font-size: large;"> 3. Feedback</span></div><div><span style="font-size: large;"> Good designs are characterized by being effective, efficient, and satisfying to the user. An interface is usable if the user can understand which input will lead to the desired output. Affordance relates to the output of the user interface. It refers to the perceived and actual properties of things, the fundamental properties that determine how things could possibly be used. The affordance tells us what actions are possible, and the signifier tells us where the action takes place. A signifier includes a textual label. Usable design requires the output of the user's actions to be visible, so it is related to feedback. Feedback means sending back to the user information about what system input has occurred. So, we design and build for people in 2 categories. Those are,</span></div><div><span style="font-size: large;"> 1. 
Business to Business (B2B)</span></div><div><span style="font-size: large;"> 2. Business to Consumers (B2C)</span></div><div><span style="font-size: large;"> B2B people visit the website to carry out business tasks and business activities, and they use it to do their job during the day. So, when you are designing for B2B users, you want to know their job roles and daily responsibilities. The processes and the issues that they encounter matter to you. You need to figure out the specific process, task, or end goal and the problems, obstacles, or inefficiencies they deal with.</span></div><div><span style="font-size: large;"> If the product is directly used by the consumer, then it is called B2C. In B2C, we are not asking about daily work habits, company goals, and assignments; those factors have less influence on their use of our website or purchasing decisions. Instead of solving workplace problems or suggesting solutions, we need to figure out what frustrates them, which tasks put them off, how often they visit the website, and how they use it. When you walk through their process, you can get clues about the friction points, and what you do on your website should eliminate them. </span></div><div><span style="font-size: large;"><br /></span></div><h2 style="text-align: left;"><font color="#b51200">Basics of HTML:</font></h2><div><span style="font-size: large;"> The user does not care about the HTML code; they are interested in the final web page. The web browser translates the HTML code into the web page. Different HTML versions cause different behavior in your browser. So, declare <!DOCTYPE html> as the very first statement of the HTML file, and do not leave any whitespace before it. The version declaration of XHTML is more complicated: there are Strict and Transitional version declarations in XHTML, and you can get more information in the declaration documentation. There are 3 audiences for the HTML code. The browser translating our code into the web page is the first audience, and the website user is the second audience. Nowadays, website users are brought in by search engines, so SEO is the third audience. The meta tag carries information for the search engine and the browser, not for the website user. There are 2 types of meta tags,</span></div><div><div><font size="4"> 1. name META, eg., <meta name=' ' content=' ' /></font></div><div><font size="4"> 2. http-equiv META, eg., <meta http-equiv=' ' content=' ' /></font></div><div><span style="font-size: large;"><br /></span></div><div><span style="font-size: large;">Both work through the content attribute. The name meta mainly provides information to the search engine. There are 6 kinds of commonly used name metas. Those are,</span></div><div><span style="font-size: large;"> 1. Keywords</span></div><div><span style="font-size: large;"> 2. Description</span></div><div><span style="font-size: large;"> 3. Author</span></div><div><span style="font-size: large;"> 4. Copyright</span></div><div><span style="font-size: large;"> 5. Generator</span></div><div><span style="font-size: large;"> 6. Robot</span></div><div><span style="font-size: large;">The keywords and description are used to describe the content of the web page and act as its name card. 
The search engine takes these meta tags into consideration when analyzing the web page. The ownership of the web page is described by the Author and Copyright metas. The generator meta describes the back-end technology: for ex, if the website is powered by Django, you can set the content attribute value to django. The robot meta decides whether the web page is open to search engines or not; we can set it to "all" for maximum SEO results and to "none" for maximum privacy. The http-equiv META can be used for auto-refreshing and redirecting the web page. For ex, </span></div><div><span style="font-size: large;"> <meta http-equiv='refresh' content='5;url=http://www.creativewebgraphic.com/' /></span></div><div><span style="font-size: large;">This will redirect our web page to another website 5 seconds later.</span></div><h2 style="text-align: left;"><font color="#b51200">Creating Web Pages: </font></h2><div><span style="font-size: large;"> Web pages display 4 kinds of information: text, images, video, and audio. A website whose complete web pages are developed from top to bottom is a static website. Dynamic websites, by contrast, do not have real web pages: the pages are generated by a back-end program, so complete pre-built web pages are not required. The way dynamic websites create their web pages is a lot like filling in forms. Each form is an empty mini page template. Based on the user request, the program retrieves data from the database and puts it into the corresponding mini template. In the end, all the filled mini templates are assembled in a pre-defined order. This is how dynamic websites create their web pages, and front-end development for dynamic websites is mainly about creating many templates.</span></div><div><span style="font-size: large;"> Web pages are made up of 3 parts: header, footer, and main body. In the header, you can see the logo, navigation bar, and search box; if the website contains a registration and login system, the header also has register and login buttons. The copyright declarations, contact us link, and privacy policy are included in the footer area. Most of the information is displayed in the main area, and websites can have a sidebar for better navigation. The main area can also be divided into several mini templates. Using the same mini template repeatedly is the typical characteristic of a dynamic website, so we only need to develop one mini template irrespective of how much information we need to present.</span></div><h2 style="text-align: left;"><font color="#b51200">Working on a Project:</font></h2><div><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwZ18hWnc7sbk7PntflZwk9oOTMRW1cun-FBymRAqQS1UQZ_uJBnhlyNrUT60PDnqf7U_PVZWUO5KSnMj3U4ZAw4Y4q9ibz4GLdnsIT2zNSIW2zm49SBqRF4qnBgfF4YANc5fNNrcUTMY/s1522/dashboard.jpg" style="margin-left: 1em; margin-right: 1em; text-align: center;"><img border="0" data-original-height="704" data-original-width="1522" height="296" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiwZ18hWnc7sbk7PntflZwk9oOTMRW1cun-FBymRAqQS1UQZ_uJBnhlyNrUT60PDnqf7U_PVZWUO5KSnMj3U4ZAw4Y4q9ibz4GLdnsIT2zNSIW2zm49SBqRF4qnBgfF4YANc5fNNrcUTMY/w640-h296/dashboard.jpg" width="640" /></a></div><div><span style="font-size: large;"> </span></div><div><div><span style="font-size: large;">Before developing any project, it is important to analyze the page structure. To adapt a web page to different device screens, you must be familiar with the structure.</span>
<span style="font-size: large;">When you do the program analysis of the above page, there are 4 visible parts inside the container. The page elements are placed inside the container. The width of the header is determined by the parent DIV, but the widths of the Main, Sidebar, and Footer are fixed; these are the things a media query needs to take care of. To display the main area and sidebar on the same line, we need to float them. Floated elements do not generate height for the parent element, and they interact with the flow elements defined after them. </span></div></div><div><span style="font-size: large;"> Another common component on websites is the carousel. A carousel is a window that switches between images. The carousel is made up of 3 parts: the image and the two buttons that are displayed on top of the image. </span><span style="font-size: large;">As a programmer, you should also identify the invisible part: the parent DIV holding the image and the two buttons. So, the program blueprint is,</span></div><div><font size="4"><span> * Create the parent DIV, then put both the image and the two buttons into it.</span></font></div><div><font size="4"><span> * The size of the parent DIV is the same as the image, so its vertical center overlaps the image's vertical center. Then, set the buttons to absolute position and the parent DIV to relative position. </span></font></div><div><font size="4"><span> * Now, the parent DIV is the reference object for the two buttons. </span></font></div><div><font size="4"><span> * Finally, move the two buttons to the vertical center of the parent DIV using top: 50% and a negative margin-top of half their height for the final adjustment.</span></font></div><div><font size="4"> If you look at the menu structure, you can divide it into parent and child DIVs so that you can place the menu items appropriately. For the Registration and Login page, you need to create a complete static web page. It has a header, footer, and main area. So, you need some analysis: check how many levels/lines there are in the main area and figure out how to design your code. </font></div><div><div><br /></div></div><h3 style="text-align: left;"><font color="#b51200">Modal Creation: </font></h3><div><font size="4"> A modal sits on a half-transparent cover, centered on the page, and is usually a login form or message board. Clicking the rest of the cover makes it disappear. The login area DIV floats on top of all page elements, above the cover. Therefore, set its position to absolute, give it the biggest z-index value, and move it to the center of the browser window (top: 50%; left: 50%). Then, do the final adjustment with margin-top and margin-left for the correct position.</font></div><h3 style="text-align: left;"><span style="font-size: large;"><font color="#b51200">OTP System:</font></span></h3><div><span style="font-size: large;"> A One Time Password system is a concept to prevent spam and unwanted hacks on websites or mobile applications. An OTP-integrated website secures </span><span style="font-size: large;">individual user </span><span style="font-size: large;">data online. An OTP is an automatically generated numeric or alphanumeric string that helps in the authentication of a single transaction or session of a particular user. Mostly, it is used in the Registration/Login system of customer panels and on websites for payment transactions. There are 2 types of OTP system. Those are,</span></div></div><div><span style="font-size: large;"> 1. 
Mobile-Based OTP System</span></div><div><span style="font-size: large;"> 2. Email-Based OTP System</span></div><div><span style="font-size: large;">For a mobile-based OTP system, you need to buy a bulk SMS service from an SMS service provider, and this will cost you money. But if you have a budget constraint, you can send the OTP via email to your users for authentication of any transaction or session.</span></div>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-12870814279222499992020-05-31T06:19:00.093-07:002020-06-05T23:18:33.062-07:00How to build apps on Atlassian Jira cloud?<div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjI2yXGM8onEBBPXhXaqb7a366qXvCrLOCPrVC1ZlDM1g6PrlvEXoxqKC0mdWHHR1EDkvmncVZeqDeuoV-eEpUbOqDihB5mnoid7CZ9c428NPZQOoSVVEazB0XAcJ5E9zQmpOmaGOFjzwo/" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="600" data-original-width="1200" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjI2yXGM8onEBBPXhXaqb7a366qXvCrLOCPrVC1ZlDM1g6PrlvEXoxqKC0mdWHHR1EDkvmncVZeqDeuoV-eEpUbOqDihB5mnoid7CZ9c428NPZQOoSVVEazB0XAcJ5E9zQmpOmaGOFjzwo/w640-h320/jira2.jpg" width="640" /></a></div><font size="4"><br /></font></div><font size="4"> </font><div><font size="4"> Atlassian Jira is a proprietary web-based product developed by the Australian company Atlassian. It provides issue tracking, project management, and bug tracking. Jira helps with planning, assigning tasks, monitoring end-to-end progress of projects, etc. </font><span style="font-size: large;">It is mainly useful for software builders, developers, and product managers working on software projects.</span><span style="font-size: large;"> Jira supports the Agile methodology and has highly customizable dashboards and intuitive search features like basic and advanced search. A JQL query helps to pull out complex reports on the progress of your project. There are 3 kinds of products in JIRA, depending on the project requirement. JIRA Core is suitable for business teams, so Marketing and HR can track projects and employee on-boarding. JIRA Software is primarily for the development team who are developing software or apps and tracking the project. JIRA Service Desk is an IT help desk for issue tracking and service requests. JIRA is available as an on-demand, monthly-subscription, cloud-based SaaS or can be deployed on your own server for an upfront license.</span></div><div><font size="4"><br /></font><div><div><font size="4"><font color="#b51200"><b>Agile Scrum Board:</b></font> Scrum is used more for the project deliverables and deadlines in the software development lifecycle, like how or when things will be done; kanban is useful to track the tasks that are completed, left for the future, etc. Historically, it was a physical board with post-it notes or cards representing the work items. Scrum boards help to visualize all the work in a sprint. Scrum boards can be customized for the team's or project's unique workflow by adding the epics, assignees, projects, and more. 
At the end of the sprint, you can get an overview of the issues that are completed, and unfinished issues will be moved to the backlog for the next sprint planning.</font></div><div><font size="4"><br /></font><div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinMbWijHPTgSeckN4MDJ2cvWr7EhF_LArRvOkqPQ5-7pc3I2i3TqCqb_XKYEZOsjXX5BhW10yovfbkk67yCk3BQhoGzkz4Oi70kjI1GqhxYJvoSR0eL9-fYSTDu-TB0DwbirMV1y0VwIE/" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="657" data-original-width="1344" height="312" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEinMbWijHPTgSeckN4MDJ2cvWr7EhF_LArRvOkqPQ5-7pc3I2i3TqCqb_XKYEZOsjXX5BhW10yovfbkk67yCk3BQhoGzkz4Oi70kjI1GqhxYJvoSR0eL9-fYSTDu-TB0DwbirMV1y0VwIE/w640-h312/backlog.jpg" width="640" /></a></div><font size="4"><br /></font><div><font size="4"><b><font color="#b51200">Issue Template:</font></b> Open the issue template by clicking Create Issue in the Jira backlog. </font><span style="font-size: large;">Clicking the three dots on an issue brings up a create issue card where you can update the title of the issue and the issue type: story, task, bug, epic, etc. </span><font size="4">First, select the project for the issue and then select the issue type; if you want to create a particular kind of issue, you need to select that issue type. Depending on the issue type, the fields in the template change, because the issue type, and the fields displayed for that particular type, can be customized. Then, provide the summary and description of your issue type (eg, story, epic, bug, task, etc.). </font><span style="font-size: large;">A technical task is a subpart of a story, a bug is an issue with the software, and an epic is a group of stories. </span><span style="font-size: large;">Labels help to identify which part of a complex project a story belongs to: front-end, back-end, web, etc. Story points are an estimate of the amount of work it is going to take to deliver the broken-down story with its acceptance criteria.</span><span style="font-size: large;"> </span></div><div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih8NHyvxKqrTFC8AegoFyA2SmxNSa5uRJVNvaSourtnUmSOsIu0nRYGsrI1GRZPlNpVgHuEFg9OaFnUdYFBL-TYjsL-UfhcT61j1iLRZp0PvO1KxU8QHRSPA9CJFD-GXmBdYsg-KE9oUM/" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="655" data-original-width="1357" height="308" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEih8NHyvxKqrTFC8AegoFyA2SmxNSa5uRJVNvaSourtnUmSOsIu0nRYGsrI1GRZPlNpVgHuEFg9OaFnUdYFBL-TYjsL-UfhcT61j1iLRZp0PvO1KxU8QHRSPA9CJFD-GXmBdYsg-KE9oUM/w640-h308/spint.jpg" width="640" /></a></div><span style="font-size: large;"><br /></span></div><div><span style="font-size: large;"><br /></span></div><div><font size="4"><b><font color="#b51200">Working on Scrum Project:</font></b> In an Atlassian Scrum project, you can see the backlog and the active sprint, which has 3 columns: to do, in progress, and done. You can add columns for testing, review, and other stages of a new workflow. Start creating the backlog with some user stories that describe things to do. For ex, "Create an app that allows the user to comment on our trip," which is then assigned to someone. 
So, start creating as many user stories as you want in your project and order them with the highest-priority tasks first. </font><font size="4">The development team estimates a two-week sprint and picks up the user stories. You are ready to start the sprint and update the sprint name, duration, and sprint goal. In the Active Sprint area, you can see the backlog items and can drag them to in-progress. Now, you can ask another user to work on the story card, and they will get a notification. </font><span style="font-size: large;">Open User Management in the settings; it will open a new window where you can add other people's email addresses. The users will get an invitation and can get to work on the backlog of our project. Typically, you assign the user stories during sprint planning, showing why the functionality is useful. </span><font size="4">When a story gets completed, you simply move it to done. Basically, this process defines your velocity: how many points you complete in a sprint. At the end of the sprint duration, you need to check that all the stories have moved to done; any incomplete story will be moved back to the backlog for the next sprint. The sprint report gives an idea of the velocity and the story points that were completed. You can take on more user stories if you are able to deliver more points.</font><span style="font-size: large;"> Epics are logical groupings of stories (grouped by color in Jira) that vary by project. Once a story is marked as done, it disappears from the epic. Epics are useful to see the progress towards the objective of your project.</span></div><div><span style="font-size: large;"><br /></span></div><div><span style="font-size: large;"><b><font color="#b51200">Acceptance Criteria:</font></b> Acceptance criteria give clear indications of the things you want to do, the way the feature is going to behave, and the things you don't want to do. The standard format of acceptance criteria is Given, When, and Then. For ex,</span></div><div><span style="font-size: large;">User Story: Having the photo feed to upload photos</span></div><div><span style="font-size: large;">* Given : I am using the application</span></div><div><span style="font-size: large;">* When : When I enter the home screen</span></div><div><span style="font-size: large;">* Then : I need to upload the photos</span></div><div><span style="font-size: large;">These acceptance criteria are used for scenarios such as when you are not on the home screen, or to fix the bug where the photos are not displayed, etc. You can link other user stories with the link options in the issue template. 
Prioritizing the backlog depends on customer needs, feedback from the MVP, difficulty, urgency, etc. The point is to get things done and deliver value to the customer.</span></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqrd0QCQ7Tf1DT9UQezqQ7NQyUUzEzL3hbE08L0s6gdIiAe3jp6GNB6m3xv7wWW4-u02sf8tH_sMsMQG-tBltKif0c5C6vVSp3msU2cBNubVmYBzMmRyn0XNLxjjIzlIj-p4HPQszaL4Q/" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="646" data-original-width="1357" height="304" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqrd0QCQ7Tf1DT9UQezqQ7NQyUUzEzL3hbE08L0s6gdIiAe3jp6GNB6m3xv7wWW4-u02sf8tH_sMsMQG-tBltKif0c5C6vVSp3msU2cBNubVmYBzMmRyn0XNLxjjIzlIj-p4HPQszaL4Q/w640-h304/sprint2.jpg" width="640" /></a></div><div><br /></div><div><span style="font-size: large;"><b><font color="#b51200">Agile Workflow:</font></b> A workflow is the underlying flow followed by each of the issues. Jira is a collection of issues, and how these issues transition from one state to another is determined by the workflow. There are some default issue types, and you can create custom issue types and workflows as per the requirements of your project. If you view a workflow, it details the states, statuses, and transitions. </span></div><div><span style="font-size: large;"> Also, you can change the workflow in Jira by adding a new column in the active sprint. Just open the board settings by clicking the three dots in the upper right-hand corner. In the current layout, you can add the columns for changing your workflow. For mapping the issues, you need to create a new workflow. So, just click through Settings and Issues, and you can see the generic Jira workflow of to do, in progress, and done. Create the new workflow by copying the existing workflow and adding the status "Open", treated just like "In Progress" because the story is not complete. This is an inactive workflow at first, so make it the default workflow for the specific project. For that, you need to go back to the Jira software, select the project settings, and update the workflow to the new workflow, with the option to add all issue types to our workflow, and publish it. Now, you can see the new column, which is ready to map on the board. Once it is ready, you can move the backlog issues to the sprint in the new workflow, which we defined as,</span></div><div><span style="font-size: large;"> ToDo --> In-Progress --> Review --> Done</span></div><div><span style="font-size: large;"><br /></span></div><div><span style="font-size: large;"><b><font color="#b51200">Design with Jira:</font></b> Designers deal with users, flows, and things that behave in certain contexts. It is important to keep all the specs together, and collaborating with your team members is what creates the next cool thing. The development team has its roadmap in Jira Portfolio. The portfolio has the following hierarchy:</span></div><div><span style="font-size: large;"> Initiatives > Epics > UserStories</span></div><div><span style="font-size: large;">Initiatives group the epics and span many sprints. To plan epics in sprints, you need to break them into smaller pieces. A user story is a piece of work in a single sprint written from the perspective of the end-user. There is a parent-child relationship between initiatives, epics, and user stories: an epic always belongs to one initiative, and a user story always belongs to one epic. 
By understanding these concepts, we can build our first simple roadmap. The portfolio helps to present a high-level view of the work to stakeholders. To get the portfolio to work in the desired order, it is important to understand how prioritization works in the portfolio:</span></div><div><span style="font-size: large;"> * Epics (and user stories) should belong to one release.</span></div><div><span style="font-size: large;"> * Plan all the epics in the same order as the releases they belong to.</span></div><div><span style="font-size: large;"> * Only adjust the ordering of epics and user stories within the same epic and release.</span></div><div><span style="font-size: large;"> You can add Figma files as live embeds in Jira software, which will help your team to coordinate deadlines, plan tasks, and ship the product at a much faster rate. The embedded Figma design automatically updates in the Jira ticket, so with the live embed, everyone building the product knows the current state of the design. In order to embed Figma, just enable the Figma integration from the Atlassian Marketplace for Jira software and paste the URL in the design section of the Jira ticket.</span></div>Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-36618410540473899162020-05-15T06:17:00.002-07:002020-05-17T18:57:29.117-07:00What are the basics of R programming?<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVzRKj1Uyi64NqHoTBGGKyOFUUI3J8AavqVrnhXFMzm-GX3R1BpILCNBDvLSMTZJIDvVLRgLeLMhfHGvZkFj7JBE56HV03Yl3B8ZbBBE-ODhSj075tgdPGkUaAmv78TVB84rEH1KoyybQ/s1600/R-Program.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="728" data-original-width="1359" height="342" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiVzRKj1Uyi64NqHoTBGGKyOFUUI3J8AavqVrnhXFMzm-GX3R1BpILCNBDvLSMTZJIDvVLRgLeLMhfHGvZkFj7JBE56HV03Yl3B8ZbBBE-ODhSj075tgdPGkUaAmv78TVB84rEH1KoyybQ/s640/R-Program.jpg" width="640" /></a></div>
<br />
R is an open-source, high-level programming language and software environment used for statistical and data analysis purposes. This domain-specific language is a great tool for data modeling, graphical representation, and reporting, so that statisticians and data miners can manipulate and present data in a compelling way. To start programming, you need to install R on your system. To install the R console, go to <a href="https://www.r-project.org/">R-Project</a> and choose the preferred mirror. Then, you can install the second component, <a href="https://cran.r-project.org/">R-Studio</a>, which we add on top of R. RStudio is a GUI (Graphical User Interface) that makes the interface more appealing to work with R. Basic operations can be executed through the click interface without code, like opening or storing your script, importing/exporting .csv files, package management, and help features. You can install RCmdr, which focuses on statistics and graphics, with the command install.packages('Rcmdr'). RStudio has the script section in the upper left, and the console section, where the actual calculation takes place, in the lower left. The code is sent from the script section to the console. You can write code directly in the console, but it is difficult to manipulate and correct it there. The upper right window has the environment section, where all the objects created in the session are listed. In the lower right, the files section helps to store files or import them into the studio, the plots section displays the graphs you draw, and the packages tab lists all the packages installed on your machine. There are 8 packages supplied with the R distribution. You can get the user manuals and documentation for each function, which will save you plenty of time. The viewer is not likely to be used at the beginner level. So, R is an environment within which statistical techniques are implemented or extended by packages.<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<b><span style="color: purple;"><br /></span></b>
<b><span style="color: purple;">Setting up a Script:</span></b> Simply click the + sign in the studio to create a script. From the .R extension, you can tell that this is our script. With the script created, we start coding, which means creating objects. So, start by creating the object "myFirstObject" as a vector with the numbers 5 through 10. There are 3 important things to observe. First, the code is sent from the script window to the console window, where the calculation takes place. Second, we do not see the new object in the console; there is no result and no output. Third, the new object has been created in the environment: the computer now recognizes an object called myFirstObject. The colon indicates the series of numbers. If you just type myFirstObject, you will get the output in the console. This is one way to display the data. A more sophisticated way is the environment pane, where you can see the simple vector of length six. Also, you can plot those 6 integers against their index, which is the position from 1 to 6 in the vector.<br />
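As a minimal sketch of that first script (following the object name myFirstObject used above):<br />
<pre># create a vector holding the integers 5 through 10
myFirstObject <- 5:10

# typing the name prints the vector in the console
myFirstObject    # [1]  5  6  7  8  9 10

# plot the six values against their index positions 1 to 6
plot(myFirstObject)</pre>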
<br />
<b><span style="color: purple;">Key Features of R:</span></b> R has tons of functions and object classes already defined, which makes data analytics quite straightforward. Our task is to find the proper function and supply the correct specification to the function in the form of arguments. For ex, plotting a histogram is as simple in R as the command 'hist(x)'. So, your responsibility shifts from building a function from scratch to picking the prepackaged function that fits your needs. R is getting more popular and adapts to new developments very fast and with high quality. It performs the following tasks, as the short sketch after this list illustrates,<br />
* Data entry, including features for direct data entry<br />
* Data preprocessing like cleaning, changing, deleting or filtering data<br />
* Statistical analysis, including modeling, machine learning, and prediction<br />
* Data simulations to varying degrees<br />
* Data visualizations up to complex graphs<br />
* Web scraping, e.g., for Twitter analytics<br />
* Data visualization frameworks for website integration<br />
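A rough sketch of some of these tasks in one short session (the file sales.csv and its amount column are hypothetical placeholders):<br />
<pre># data entry/import: read a CSV file into a data frame
sales <- read.csv("sales.csv")

# preprocessing: keep only the rows with a positive amount
clean <- subset(sales, amount > 0)

# statistical analysis: quick summary of the column
summary(clean$amount)

# visualization: distribution of the amounts
hist(clean$amount)</pre>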
<br />
R is a community-based project, and R base has nearly 8000 add-on packages contributed by the ever-growing community. It is easy to manage packages in the packages tab. The system library came with the installation and is the base of R. The user libraries are downloaded from the web in a two-step process: you download a package from the repository, then activate it. Once you install the required library, it is available in the user library; just tick the library to activate it. By general consensus, R is comparable or superior to the proprietary statistics packages.<br />
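The two-step process looks like this in code, using ggplot2 (mentioned later in this post) as the example package:<br />
<pre># step 1: download the package from a repository into the user library
install.packages("ggplot2")

# step 2: activate it for the current session (same effect as ticking it)
library(ggplot2)</pre>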
<br />
<b><span style="color: purple;">Coding with R: </span></b> For coding, you need to understand the objects in R. An object is a collection of data and can be a whole dataset loaded into R, the result of a calculation, or part of a dataset with specific traits. Objects can have different properties and different classes in R. The object class determines what you can and cannot do with the data. The classic object class is 'data.frame'. It is similar to an Excel sheet, with columns for variables and rows for observations, and many functions in R work with data frames. The simplest object class in R is the vector: a collection of values of the same class. If you put the stock prices of 20 days into a single Excel column, that is a vector. If you add the date in another column, you get the time series class, or TS. The following is a list of commands used frequently in R,<br />
* the ls() command lists all the objects created in the session<br />
* you can remove an object from the environment with the command rm("objectname")<br />
* the predefined seq() function helps you generate values according to its arguments. For ex, seq(from=3, by=3, length.out=3) generates the values [1] 3 6 9.<br />
* the paste() function concatenates strings. Anything you feed into the function is turned into a vector of characters. For ex, paste("XYZ", 1:2) produces "XYZ 1" and "XYZ 2".<br />
* we can identify index positions by the observation number in the vector. Let's say an integer vector x <- 4:20, which has 17 index positions. which(x == 12) produces the index position of the value 12, in this case 9. The value at an index position can be obtained with x[position], e.g., x[9].<br />
<b><span style="color: purple;"> * Functions in R:</span></b> R recognizes functions as objects, so whenever you create a function, it appears in the environment. Functions do some sort of calculation for you. A basic function can be defined as,<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKnUkLwQkESymAgbclxc9vXMI8MeFCGsykpmRiaqAIipc3LgPmY2IA-mCjz5pO0OUcez-c_dq_1ycx0vpmBoJMi1Bw5psyg_nvwxU-7rqdAH6TKrxAxOgoh8FLNtzxvVKN3W_jHWMx7v0/s1600/function.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="34" data-original-width="489" height="44" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKnUkLwQkESymAgbclxc9vXMI8MeFCGsykpmRiaqAIipc3LgPmY2IA-mCjz5pO0OUcez-c_dq_1ycx0vpmBoJMi1Bw5psyg_nvwxU-7rqdAH6TKrxAxOgoh8FLNtzxvVKN3W_jHWMx7v0/s640/function.jpg" width="640" /></a><span style="text-align: left;">which will generate the output of 100.</span></div>
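The code above is a screenshot; one possible function of the same shape (my reconstruction, since the exact code in the image may differ) that also generates 100 is:<br />
<pre># a basic function: takes one argument and returns its square
square <- function(x) {
  x^2
}
square(10)    # [1] 100</pre>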
<div class="separator" style="clear: both; text-align: center;">
<b style="text-align: left;"><span style="color: purple;"> * Loops in R: </span></b><span style="text-align: left;">Loops allow operations to be repeated. R has for and while loops; a for loop repeats certain operations a fixed number of times. The syntax of the for loop is "for(name in vector) {commands}".</span></div>
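<div class="separator" style="clear: both; text-align: justify;">
For ex, a small for loop that prints the squares of 1 through 3:</div>
<pre>for (i in 1:3) {
  print(i^2)
}
# [1] 1
# [1] 4
# [1] 9</pre>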
<div class="separator" style="clear: both; text-align: center;">
<b style="text-align: left;"><span style="color: purple;"> * DataSets:</span></b><span style="text-align: left;"> Various datasets come with R base or with add-on packages. They allow you to try out the features of R. You can check out the package "datasets" in R-Studio, which lists the available data sets alphabetically. If you want to use one of the datasets, you should check out its help section, including its variables and dimensions. The following commands can be applied to datasets, </span>
<div class="separator" style="clear: both; text-align: center;">
<span style="text-align: left;">- head(dataset) and tail(dataset) give the first and last six rows of observations, respectively</span>
<div class="separator" style="clear: both; text-align: center;">
<span style="text-align: left;"> - summary(dataset) provides basic statistics like min, max, median, mean, and quartiles for each variable.</span>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;"> - plot(dataset) helps to draw the scatter plots for the dataset variables. A histogram is useful for one variable in your dataset; hist() on such a variable, for example time-series data, helps you get an idea of its distribution. The visual impressions are a valuable source of insight into your data.</span>
<div class="separator" style="clear: both; text-align: justify;">
<b style="text-align: left;"><span style="color: purple;"> * Data Frames:</span></b><span style="text-align: left;"> We can see the variables in a data frame with the command "head(dataset)". If you want to extract a single column, you need to use the $ sign; that way the computer knows you mean column X of data frame Y. For ex, if you want the sum of a column in the mtcars dataset, you specify it as sum(mtcars$wt). But if you work with a single data frame, you can attach the dataset to your environment so that R understands which data frame the variables belong to. The "attach(dataset)" command attaches the dataset to the environment; then, just state the variable without the $ sign, as in "sum(wt)", to access the variable in the dataset. You can remove the dataset from the environment with the command "detach(dataset)". A specific value in the dataset can be addressed by the position of the row and the position of the variable, such as "mtcars[3,6]". From the head command, you can determine the position of the variable. The concatenate tool c() helps to select several rows at once, such as "mtcars[c(2,3,4),6]", which gives the values at the index positions 2, 3, and 4 of variable number 6.</span>
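<div class="separator" style="clear: both; text-align: justify;">
Putting those data frame commands together in one short session with the built-in mtcars dataset:</div>
<pre>head(mtcars)            # first six rows: cars as rows, variables as columns
sum(mtcars$wt)          # total of the wt (weight) column, addressed with $
attach(mtcars)          # make the columns visible without the $ prefix
sum(wt)                 # same total as above
detach(mtcars)          # detach the dataset from the environment again
mtcars[3, 6]            # row 3 of variable number 6 (wt)
mtcars[c(2, 3, 4), 6]   # rows 2 to 4 of the same variable</pre>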
<div class="separator" style="clear: both; text-align: justify;">
<span style="color: purple; font-weight: bold; text-align: left;"><br /></span></div>
<div class="separator" style="clear: both; text-align: justify;">
<span style="color: purple; font-weight: bold; text-align: left;">Data Visualization in R: </span><span style="text-align: left;">R has an extensive collection of functions and add-on packages for visualization. The standard plots like the histogram and scatterplot have more than 10 functions each, plus all sorts of arguments you can use to tweak the plots. Starting R programming with plotting is a viable approach for visually oriented people. There are 3 different systems for producing graphs in R,</span>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;"> 1. R-Base</span></div>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;"> 2. Lattice</span></div>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;"> 3. ggplot2</span></div>
<div class="separator" style="clear: both; text-align: justify;">
<span style="text-align: left;"> R base is the default way of data visualization. There are several functions for different plot types, like plot, hist, barplot, boxplot, etc. They give a quick idea about your data, but they are not primarily made for polished graphs. Lattice is appropriate for scientific publications. Its syntax is similar to R base but has different characteristics; its plot matrices put several plots on one page for comparison. ggplot2, a project by Hadley Wickham, is arguably the best of the three. It is a more advanced tool to code with, and you can create all the available standard visualizations with it.</span>
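<div class="separator" style="clear: both; text-align: justify;">
A quick sketch of the same data in each of the three systems (assuming the lattice and ggplot2 packages are installed):</div>
<pre># R base: quick histogram of the mpg variable
hist(mtcars$mpg)

# lattice: the same variable, conditioned on the number of cylinders
library(lattice)
histogram(~ mpg | factor(cyl), data = mtcars)

# ggplot2: scatter plot of weight against mpg
library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg)) + geom_point()</pre>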
</div>
Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-79889817173577184762020-04-30T06:39:00.001-07:002020-05-02T02:57:31.100-07:00What are the fundamentals of Deep Learning and Computer Vision?<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOm078kNHpdaSKkdbTO3ZcwO-Jg3m5kpMZrosMboAXQEfJ2vVxEgAACkY6lci4_BGY_ALIoU-iFdokG10BtLa8NrPQEthL8RQ-oGNrfZDP5U6k21hmcqJtS3UF2BzlNxd4QO4hhIaeVag/s1600/supermarket.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="500" data-original-width="1133" height="282" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhOm078kNHpdaSKkdbTO3ZcwO-Jg3m5kpMZrosMboAXQEfJ2vVxEgAACkY6lci4_BGY_ALIoU-iFdokG10BtLa8NrPQEthL8RQ-oGNrfZDP5U6k21hmcqJtS3UF2BzlNxd4QO4hhIaeVag/s640/supermarket.jpg" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
Computer Vision is concerned with the automatic extraction, analysis, and understanding of useful information from a single image or a sequence of images. It involves the theoretical and algorithmic basis to achieve automatic visual understanding. Computer vision allows deeper and more impactful insights for businesses in all industries. For ex, healthcare service providers will be able to quickly and safely diagnose and treat patients, and manufacturing will have enhanced security and productivity. Computer vision helps to keep track of assets and assure the safety of locations and employees. Businesses can be improved further with the addition of edge computer vision. <br />
The properties and characteristics of the human visual system give inspiration for designing computer vision systems. From the biological point of view, the field aims to come up with computational models of the human visual system. In engineering, it aims to build autonomous systems that perform some of the tasks the human visual system can perform, and even surpass it in many cases. Computer vision is useful in several application areas: smartphone cameras recognize faces and smiles, factory robots monitor problems and co-workers, etc. Computer vision incorporates concepts from digital signal processing, neuroscience, AI, computer architecture, and software engineering. It can also be studied from a mathematical point of view: the methods in computer vision are based on statistics, optimization, and geometry. In general, the image acquisition devices of computer vision systems capture visual information as digital signals, hence the need for digital signal processing techniques. Digital image processing deals with image transformation, compression, restoration, and enhancement. Computer vision relies on image processing techniques to preprocess the image data for robust high-level analysis in application development. Neuroscience plays an important role in image processing. Machine vision applies a range of technologies and methods to provide image-based automatic inspection, process control, and robot guidance in industrial applications.<br />
<br />
<b><span style="color: purple;">Computer Vision Applications:</span></b> CV applications include industrial vision systems and robotics, outperforming humans at tasks such as circuit board inspection, face recognition, multimedia, medical imaging, etc. The newly emerging applications are augmented reality, autonomous driving, IoT, etc. A computer vision system has basic elements such as a power source, a camera, a processor, and control and communication cables, along with configuration software and a monitor to display the system. The Vision Processing Unit is an emerging processor that complements the GPU and CPU. In the real world, CV is used in,<br />
* visual surveillance and drones, which help to keep track of many events<br />
* biometric applications: fingerprint authentication and face recognition are widely used in various industries to keep track of employee details<br />
* navigation: stereo vision and depth-sensor based systems are used for robot navigation<br />
* autonomous driving: lane detection keeps the vehicle in the designated lane, and complete scene understanding is the basic requirement for autonomous driving vehicles<br />
* automated supermarkets, which are powered to keep track of the customers' products and carts<br />
* the <a href="https://www.microsoft.com/en-us/ai/seeing-ai">Seeing-AI</a> app, a Microsoft project that helps to turn the visual world into an audible experience. For ex, the app recognizes saved friends and facial expressions, reads aloud text that comes into view, scans and reads documents such as books and letters while recognizing the text's formatting, identifies currency values, and its bar code scanner helps to find the product you want.<br />
<br />
<b><span style="color: purple;">Convolutional Neural Networks(CNN):</span></b> A CNN is a supervised deep learning system used for computer vision. The computer processes images much the way our brain does. Our brain recognizes an image based on features it observes or picks up: when you look at an object and understand its features, you are able to identify it. You might not be able to identify an object if you have never seen its features before or your mind cannot interpret the features. A CNN processes an image the same way. Facial recognition, object classification in photographs, and Facebook name tagging are good uses of the CNN architecture. You pass an image as input to the CNN architecture, and it outputs the label or image class. The computer reads the image in digital form, in which every photograph is made of pixels. Pixels are the smallest units of information in the picture, usually round or square and arranged in a 2-dimensional grid. The process of a CNN is divided into 5 steps. Those are,<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7Abhdi4uGqovAfnxDCawnDcKZ_gmXCx8ykmfPL_S87Pz6jB75khvKOQXb9qxu6qx4d5wfEtVHhqmY3QPMpPdwCm3jSoGa2VW-YxEvakBRWZCDBCi8C_eN6VGjF2flYpfCkYi_EOkOr74/s1600/feature-map.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="243" data-original-width="596" height="161" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg7Abhdi4uGqovAfnxDCawnDcKZ_gmXCx8ykmfPL_S87Pz6jB75khvKOQXb9qxu6qx4d5wfEtVHhqmY3QPMpPdwCm3jSoGa2VW-YxEvakBRWZCDBCi8C_eN6VGjF2flYpfCkYi_EOkOr74/s400/feature-map.jpg" width="400" /></a> 1. Convolution<br />
2. Feature Map + ReLU Layer<br />
3. Pooling<br />
4. Flattening<br />
5. Full Connection<br />
<br />
At the base of convolution, there is a filter called a feature detector. The first thing in a CNN is the feature map: a reduced version of the actual image, created by applying a feature detector to pull out important features. Different feature maps hold different features and are combined together to form the first convolution layer; a CNN architecture can have multiple convolution layers. The Rectified Linear Unit (ReLU) function, combined with the convolutions, is used to increase the non-linearity in our image. Pooling helps to remove <br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjS4aO1FybWsefwZ5IFVyFM0bsGYqbNA3i_j5AXcVgJolIbsD5yTxJqNHum7uK7MVDO8pK3j0Ql7kVuzGEXoI7Uvq8x-uFeBzBsH2oIRMaJ0K_uCrk16YJ-T19zkdobj0cCy41g6U0XY6Q/s1600/Max-pool.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="260" data-original-width="538" height="154" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjS4aO1FybWsefwZ5IFVyFM0bsGYqbNA3i_j5AXcVgJolIbsD5yTxJqNHum7uK7MVDO8pK3j0Ql7kVuzGEXoI7Uvq8x-uFeBzBsH2oIRMaJ0K_uCrk16YJ-T19zkdobj0cCy41g6U0XY6Q/s320/Max-pool.jpg" width="320" /></a>the unnecessary pixels and retain the important features. There are various types of pooling, like max pooling, min pooling, average pooling, etc. Max pooling over each frame is done as shown in the picture: we simply take the maximum pixel from each section and put it in the output, so, for ex, a 2x2 section containing 1, 3, 2, and 4 keeps only the 4. After the pooling, we apply flattening to the pooled feature map. Flattening converts the pooled feature map into a single column so that it can go as input to a fully connected network (ANN) or another classifier. Finally, the fully connected layer gives the output, which is the class of the image.<br />
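As an illustrative sketch (not from the original post), the five steps map one-to-one onto layers in the keras R package, assuming keras and its TensorFlow backend are installed; the input size and filter counts are toy values:<br />
<pre>library(keras)

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3),  # 1. convolution ->
                activation = "relu",                  # 2. feature map + ReLU
                input_shape = c(64, 64, 3)) %>%       #    (64x64 RGB input)
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%       # 3. pooling
  layer_flatten() %>%                                 # 4. flattening
  layer_dense(units = 128, activation = "relu") %>%   # 5. full connection
  layer_dense(units = 1, activation = "sigmoid")      # output: image class</pre>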
<br />
<b><span style="color: purple;">VGG16 Architecture & Transfer Learning: </span></b>VGG16 is a convolutional neural network architecture that was used to win the ILSVRC (ImageNet) competition in 2014. VGG stands for Visual Geometry Group, and 16 is the number of layers used by this group for the network; it is also called OxfordNet. The goal of this model is to calculate a probability between 0 and 1 for any given image and choose the category with the highest probability. For ex, suppose you pass the image of a car: the model calculates probability scores between 0 and 1, and if the car category has a high probability, it categorizes the image as a car. The Oxford team made the structure and the weights of the trained network freely available.<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpT7Nnu1ZImZEwCHuORxHuvJRE0lnxbQRVV1rhUN4gXXlUaPkmgzqC4XSBmJOKtvA38CBrHNo9R3haP3o4fR1hevJjkUKcX-pXxI7A1Pbf7geSRSbgjvV1VT9gLDEXcsHZ1W1Rk0t9MMo/s1600/VGG16.jpg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="235" data-original-width="844" height="178" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgpT7Nnu1ZImZEwCHuORxHuvJRE0lnxbQRVV1rhUN4gXXlUaPkmgzqC4XSBmJOKtvA38CBrHNo9R3haP3o4fR1hevJjkUKcX-pXxI7A1Pbf7geSRSbgjvV1VT9gLDEXcsHZ1W1Rk0t9MMo/s640/VGG16.jpg" width="640" /></a></div>
<br />
It has 16 layers with learnable weights: 13 convolution layers and 3 fully connected (dense) layers. The input passes through the various convolution layers, pooling, and fully connected layers, and the output is produced. Images are passed through these 16 layers, and the category or label that comes out of the model is one of the 1000 categories in ImageNet. It is a very successful and accurate model.<br />
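Loading that freely available structure and its weights is a one-liner; a sketch with the keras R package (the post itself works in Python, but the R API mirrors the same call):<br />
<pre>library(keras)

# download the VGG16 architecture with its pre-trained ImageNet weights
model <- application_vgg16(weights = "imagenet")
summary(model)   # lists the 13 convolution + 3 dense layers</pre>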
Transfer learning utilizes the weights of trained models instead of training a new model from scratch, where you would have to create the convolution layers, subsampling, pooling, and fully connected layers yourself, and getting to the desired output label would take a long time. There are various models available that can be utilized for transfer learning. Such a model has been trained on a large benchmark dataset to solve a problem similar to the one we want to solve, and its weights capture the important features of images. So, we take the model and the weights it was trained with and transfer that learning to a specific problem. For ex, if we pass an image of a car to a model trained on images of cars, the probability for the car category is high. We can then tweak the model for, say, the damaged portions of cars: we use the pre-trained model to extract the features and train our own classifier on top, which gives the output in two or three categories. Here, we take the weights of the pre-trained model and build our own classifier that identifies what we are trying to resolve.<br />
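A hedged sketch of that idea with the keras R package (the 3 damage categories and the input size are hypothetical placeholders, not from the original post):<br />
<pre>library(keras)

# reuse the VGG16 convolutional base, but drop its 1000-class top
conv_base <- application_vgg16(weights = "imagenet", include_top = FALSE,
                               input_shape = c(224, 224, 3))
freeze_weights(conv_base)   # keep the pre-trained feature extractor fixed

# our own classifier on top, e.g. 3 categories of car damage
model <- keras_model_sequential() %>%
  conv_base %>%
  layer_flatten() %>%
  layer_dense(units = 256, activation = "relu") %>%
  layer_dense(units = 3, activation = "softmax")</pre>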
<br />
<b><span style="color: purple;">Model Creation and Deployment: </span></b>This is the coding part, in Python, with various checks of our model. We take a piece of code that takes an image as input and classifies the image. We are going to utilize the Keras package. Keras is a wrapper written on top of TensorFlow; TensorFlow alone involves a complex procedure of creating graphs and defining layers manually. There is a function to load the image, and preprocess_input converts the image to a specific format; the converted image is given to VGG16. We go ahead and load the VGG16 model, utilize the ImageNet weights, and save the model as an .h5 file. Then, we define a function that takes an image path as its argument and preprocesses the image into a format that can be fed as input to VGG16. We run the prediction on the pre-processed image and check the shape, which is 1 by 1000, meaning 1 row and 1000 columns: the model is predicting the probability of the image for each category (remember, <a href="http://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json">imagenet</a> has 1000 categories). Finally, we print the top 5 predictions that our model made.
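<br />
The post describes this pipeline in Python; an equivalent sketch with the keras R package (the file name car.jpg is a placeholder) looks like this:<br />
<pre>library(keras)

model <- application_vgg16(weights = "imagenet")
save_model_hdf5(model, "vgg16.h5")   # keep a local .h5 copy, as in the post

# load one image and preprocess it into the format VGG16 expects
img <- image_load("car.jpg", target_size = c(224, 224)) %>%
  image_to_array() %>%
  array_reshape(dim = c(1, 224, 224, 3)) %>%
  imagenet_preprocess_input()

preds <- model %>% predict(img)               # dim(preds) is 1 x 1000
imagenet_decode_predictions(preds, top = 5)   # print the top 5 labels</pre>
</div>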
Balamurugan Dhttp://www.blogger.com/profile/13078851936106035408noreply@blogger.com0tag:blogger.com,1999:blog-2943630978943594838.post-51937961824804076352020-04-15T06:30:00.001-07:002020-04-22T02:07:32.023-07:00How can AI product strategy will rebound from COVID-19 crisis?<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkJf9684Sh8o1gZ24y1oIvpSF7egov0bN3TeuZeWOznHbkIRnBTrKeFq3D3THyrW8KDw5qc3u2bT3HU3B89SE8oR_wGQrHfRlmb-20aHucpxY1nCSPpxh9eWpt_SUgF8LHssARV7FA7Mw/s1600/product-ai.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="400" data-original-width="1030" height="248" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhkJf9684Sh8o1gZ24y1oIvpSF7egov0bN3TeuZeWOznHbkIRnBTrKeFq3D3THyrW8KDw5qc3u2bT3HU3B89SE8oR_wGQrHfRlmb-20aHucpxY1nCSPpxh9eWpt_SUgF8LHssARV7FA7Mw/s640/product-ai.jpg" width="640" /></a></div>
<br />
AI product management involves AI, deep learning, or machine learning to improve, create, and shape products. There are lots of AI applications in B2B and B2C products and services, like Alexa, Amazon recommendations, Tesla Autopilot, and Netflix. A data product reflects standard product development: identifying the opportunity to solve a user need, building an initial version, and iterating. You may also have to work with the complex and iterative nature of AI models and processes. So, make sure that you have the right data for the right purpose, and invest in acquiring and maintaining strategic datasets. Think about the stakeholders and their key question of what is in it for them: MVPs, models, narrowing the domain, buying/borrowing the data. Follow agile development that involves the users in the development cycle to test, refine, and improve the AI features in the product by sharing feedback with the development teams. There are 3 phases in developing an AI or machine learning product. Those are<br />
<span style="color: purple;"><b> 1. Inception:</b></span> In this stage, we decide what to do with the data and why, combining the data, analysis, and judgement. The product manager has to discover or create a product that is valuable, usable, and feasible.<br />
<b><span style="color: purple;"> 2. Development: </span></b>The AI product manager keeps all the AI elements together to create a series of MVPs with the models and the bought or borrowed data. So, understand and coordinate with the organization structure.<br />
<b><span style="color: purple;"> 3. Commercialization:</span></b> Continuously monitor the performance and improve the product with the right people, processes, and tools.<br />
<br />
<span style="color: purple;"><b>Product Strategy:</b></span> It is the integrated plan for how you will meet your objective. It is the roadmap for the product manager to work on the critical elements of product development priorities, finding profitable growth opportunities, etc. Product strategy must mesh with the broader corporate strategy and complement the strategy of your company. The key steps in product strategy are,<br />
* Build the cross-functional team<br />
* Review the company strategy<br />
* Apply the market intelligence<br />
* Make the product vision & objective<br />
* Think, analyze, and discuss the strategy<br />
* Share with managers and executives to enhance it<br />
* Use your strategy and iterate over time<br />
The cross-functional team helps with the discussion, critical thinking, and dissecting of ideas, which are best done in a group. For ex, you can discuss with another product manager, financial representatives, an engineering manager, client services, or an account manager, and go ahead to get an executive sponsor. The charter paves the way for what you are doing with your strategy. The charter may be the executive sponsor saying: this is a great idea, work on the product strategy with this team, and I want to come back in a month and present it to our executive leadership. Next, reviewing the company strategy means looking at the mission, objectives, and strategy of our company; the product strategy should nest within and complement the corporate strategy. Third, applying market intelligence depends on various factors such as market size and segmentation, customer needs, competitive positioning, technology assessment, and regulatory assessment of the new regulations you are facing. It also means looking at trends and emerging opportunities for long-term strategy work. The product vision describes how the world is a better place if we succeed. It should be compelling, ambitious, and motivating. For ex, the Wikipedia vision is compelling and motivating: imagine a world in which every single human being can freely share in the sum of all knowledge. Once you have the vision, you need to think about your objectives. These are specific and measurable goals that you set for yourself.<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9oeEL0hskK6_t0-1774lFo0_Zg4Qa9JeKgNA4XEBBKkuBhfDL98iZwFyxk3dFIRmvv7jbGrSx2W2X6CiojMLx1e2s-TJH9fEhyphenhyphenpQmjLoEG8P2uQ24005L5mPNdav_pI6ppa9uzJNGLyk/s1600/strategy.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="356" data-original-width="594" height="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9oeEL0hskK6_t0-1774lFo0_Zg4Qa9JeKgNA4XEBBKkuBhfDL98iZwFyxk3dFIRmvv7jbGrSx2W2X6CiojMLx1e2s-TJH9fEhyphenhyphenpQmjLoEG8P2uQ24005L5mPNdav_pI6ppa9uzJNGLyk/s400/strategy.jpg" width="400" /></a> The hard work of strategy development happens when you get together with the core team: analyzing, discussing, and pushing your product with different ideas, sizing market opportunities, and debating them is the strategy development. According to professors Hambrick and Fredrickson, there are 5 elements of strategy, as shown in the picture. Arenas are where you will be active, just as a sports team is active in a sports arena: the market segments, target customers, product, geography, and technology. Vehicles are what help you reach your objective. For ex, to enter a new market, do you have the capabilities to get there, and how will you fill the gaps? Will you use your engineering team or partner with another company that has the capabilities? The differentiators are about the competitive advantage: how you will meet the needs of the customers. Staging and pacing are the speed and sequence of the moves, described in product maps. Finally, the economic logic mainly depends on the money to be made at the end of the day.<br />
Once you have done all that work with your cross-functional team, you are ready to share with your management team the market background, proposed direction, timeline and actions, and financial projections. In the final step, you actually use the product strategy and iterate over time: it drives your product roadmap, product development, and growth plans. Then modify and enhance the strategy based on market feedback, revisiting the long-term direction every year so that your product has some stability. Roadmaps are time-based charts showing the planned evolution of a product or service; they help prioritize development investments and focus development teams.<br />
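<p>To make the idea of a roadmap as a time-based chart concrete, here is a minimal sketch in Python. The items and quarters are invented for illustration; a real roadmap would come out of your own strategy work.</p>
<pre>
# A toy roadmap: planned items staged by quarter (all names are hypothetical).
roadmap = [
    ("2025-Q3", "Enterprise SSO"),
    ("2025-Q1", "Core search revamp"),
    ("2025-Q2", "Self-serve onboarding"),
    ("2025-Q4", "Usage analytics dashboard"),
]

# Sorting by quarter yields the planned evolution of the product over time,
# which is what a roadmap chart visualizes.
for quarter, item in sorted(roadmap):
    print(f"{quarter}: {item}")
</pre>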
<br />
<b><span style="color: purple;">Identifying the Opportunities:</span></b> The best data product opportunities demand the product-and-business perspective with the tech-and-data perspective. Product Managers and business leaders have to identify the unsolved user and business needs. Meanwhile, the data scientist and engineers identifying feasible data-powered solutions on what can be scaled and how. The right data product opportunities are identified and prioritized by,<br />
<b><span style="color: purple;"> - Educating to data scientists about the user and business needs:</span></b> Ensure that part of the data scientist's role is to dig on data directly to understand users and their needs will help.<br />
<b><span style="color: purple;"> - Develop the data-savvy product and business group:</span></b> Individuals across a range of functions are upskilling in data, and employers can accelerate the trend by the learning program. The higher the data literacy of product and business functions will better able to collaborate with tech and data science team<br />
<b><span style="color: purple;"> - Give Data Science at the right place:</span></b> Data science can live at a different place in an organization (eg, centralized or decentralized) and the product and business strategy discussion will accelerate data product development.<br />
Companies often run into trouble with development priorities and struggle to release compelling products with a competitive advantage because of:<br />
<span style="color: purple;"><b> * Constant Changing of Strategy</b></span> - If you think about the product to get really compelling features, compelling products and this will not work your strategy changes of every 3 to 4 months.<br />
<b><span style="color: purple;"> * Death by Numerous Requests</span></b> - B2B companies that grab your products of enterprise clients or you have been in the situation to meet with one of the big clients to represent 7 to 10 percent of your business and talk for 25 requests that they want from you to respond so that they can run their service or doing better. So, you go back to your development team and ask 25 requests. Then, the next big enterprise client for 20 requests followed by 20 things. Now, you got so many requests and completely tie-up with engineering bandwidth and never get to develop a compelling breakthrough product.<br />
<b><span style="color: purple;"> * Overly Long Bureaucratic Planning Cycle </span></b>- Some companies do their market research, talk to the customers, work with the development teams to design and build by the time to get out of the market it's 20 months later and the market has changed.<br />
<br />
<b><span style="color: purple;">Objectives and Key Results(OKR):</span></b> The product vision and strategy has been set at a 5-year path for our products. This OKR converts a big picture direction to this quarter's goals. Objectives are business goals and what we want to accomplish in any given time periods or quarters. One of the quantifiable or<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNGQAaVW42OMkXq4q7PTYrycCW8fgFuT5577PDGiZpGAXv0GeNpbKUZPfTtK91P4-c7tAkDej0AzQce2Z1nerf2v4Dvq4zFXTVJ6f78h9Y4_7wPDvdouN85NxQYIF_6ZpE393t7J1vTiQ/s1600/OKR.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="482" data-original-width="712" height="270" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiNGQAaVW42OMkXq4q7PTYrycCW8fgFuT5577PDGiZpGAXv0GeNpbKUZPfTtK91P4-c7tAkDej0AzQce2Z1nerf2v4Dvq4zFXTVJ6f78h9Y4_7wPDvdouN85NxQYIF_6ZpE393t7J1vTiQ/s400/OKR.jpg" width="400" /></a></div>
measurable outcomes linked to the objective; we use these results later to measure and track our progress against the OKR. OKRs empower the team to set the objective (what our target is) and the key results (how we measure it). There are four steps in working with OKRs (a grading sketch follows the list):<br />
1. Create 2 to 3 OKRs each quarter, including measurable results, with a focus on big-impact items in the strategic areas of the product. You might have other work on the product that you need to get done, from bug fixes to client enhancement requests, so keep all of these things in the OKRs as well.<br />
2. Share your OKRs with the executive team, make sure you have consensus, and make sure you are aligned with the development team.<br />
3. Use the OKRs to tightly focus your development efforts.<br />
4. Review the OKRs with your team at the end of each quarter to see how you did, grade yourself, figure out what to do better next time, and move forward from there.</div>
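<p>To make the measurable key results and the quarter-end grading in step 4 concrete, here is a minimal sketch in Python. The objective, targets, and the 0.0-to-1.0 grading scale are illustrative assumptions, not part of any prescribed OKR tool.</p>
<pre>
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float        # the measurable outcome committed to up front
    actual: float = 0.0  # measured at the end of the quarter

    def grade(self) -> float:
        # Grade each key result on a 0.0-1.0 scale, capped at 1.0.
        return min(self.actual / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    name: str
    key_results: list = field(default_factory=list)

    def grade(self) -> float:
        # The objective's grade is the average of its key results' grades.
        krs = self.key_results
        return sum(kr.grade() for kr in krs) / len(krs) if krs else 0.0

# Example quarter: one objective with two measurable key results
# (all names and numbers are hypothetical).
okr = Objective(
    name="Grow adoption of the new onboarding flow",
    key_results=[
        KeyResult("Weekly active users of the flow", target=5000, actual=4200),
        KeyResult("Signup-to-activation conversion (%)", target=40, actual=38),
    ],
)
print(f"Quarter grade: {okr.grade():.2f}")  # averages 0.84 and 0.95 -> ~0.90
</pre>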