<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Shipsy Engineering Blogs]]></title><description><![CDATA[Insights from Shipsy's Developer Community on how to drive innovation through coding and out-of-the-box fixes to your everyday DevOps challenges.]]></description><link>https://engineering.shipsy.io</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1629192454833/8zIPj5kex.png</url><title>Shipsy Engineering Blogs</title><link>https://engineering.shipsy.io</link></image><generator>RSS for Node</generator><lastBuildDate>Tue, 21 Apr 2026 22:53:48 GMT</lastBuildDate><atom:link href="https://engineering.shipsy.io/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Let’s Play! - Building ECS-EC2 Sandbox for Cost-Efficient Testing at Scale]]></title><description><![CDATA[When products and teams are relatively smaller, using a single staging environment for UAT works fine. We can easily sync what specific code is being deployed and easily test the staging environment to ensure everything works as expected. 
However, a...]]></description><link>https://engineering.shipsy.io/building-ecs-ec2-sandbox-for-cost-efficient-testing-at-scale</link><guid isPermaLink="true">https://engineering.shipsy.io/building-ecs-ec2-sandbox-for-cost-efficient-testing-at-scale</guid><category><![CDATA[uat]]></category><category><![CDATA[Sandbox]]></category><category><![CDATA[Testing]]></category><category><![CDATA[ECS]]></category><category><![CDATA[#stagingenvironments]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Fri, 02 Sep 2022 07:28:24 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1662103167204/koZIa8lsT.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When products and teams are relatively smaller, using a single staging environment for UAT works fine. We can easily sync what specific code is being deployed and easily test the staging environment to ensure everything works as expected. </p>
<p>However, as teams and products scale, managing a single staging environment becomes complex. </p>
<p>When multiple features are tested together on a single staging environment, a feature may behave differently once it reaches production on its own. Finding the root cause of bugs becomes harder, and ensuring that new code will not conflict with the current state of the staging environment becomes a daunting task, as shown in the following image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662018967300/wGLxSJN4G.jpeg" alt="single_staging.jpeg" /></p>
<p>Hence, it becomes important to create feature-specific disposable testing environments that are cost-efficient, scalable, easily manageable, and robust.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662018998050/JhHEhxkIQ.jpeg" alt="isolated_staging.jpeg" /></p>
<p>Here is how we, at Shipsy, built an ECS-EC2 Sandbox for UAT testing at scale and unlocked up to 70% cost savings at the same time!</p>
<h2 id="heading-sandbox-design-and-architecture">Sandbox - Design and Architecture</h2>
<p>A sandbox refers to a testing environment that isolates the production environment from untested code changes and direct experimentation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662019859751/-bQgiZAJX.png" alt="image (25).png" /></p>
<p><a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<p>The entire cluster, config files, branches, services, etc., are selected from the Sandbox dashboard, and all of this data is sent to the backend server when the user clicks the deploy button.</p>
<p>Clicking deploy kicks off two actions: the data is stored in the database, and the backend server triggers a corresponding Jenkins job, which runs the deployment script over SSH.</p>
<p>Once the Jenkins job has executed, it sends the build URL and the updated deployment log to the database for visibility on the front end.</p>
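<p>The flow above can be sketched as a small handler: persist the selection, then trigger a parameterized Jenkins build that carries only the new deployment-log id. All names and URLs below are illustrative, not Shipsy's actual code.</p>

```javascript
// Hypothetical sketch of the deploy-button flow described above.
// buildJenkinsTriggerUrl targets Jenkins' standard buildWithParameters endpoint.
function buildJenkinsTriggerUrl(jenkinsBase, jobName, deploymentLogId) {
  return `${jenkinsBase}/job/${jobName}/buildWithParameters` +
    `?DEPLOYMENT_LOG_ID=${encodeURIComponent(deploymentLogId)}`;
}

async function handleDeployClick(db, payload) {
  // 1. store the full selection (cluster, branches, config files, services)
  const logId = await db.insertDeploymentLog(payload);
  // 2. trigger the Jenkins job; the deployment script later fetches
  //    everything back from the database using this log id
  return buildJenkinsTriggerUrl("https://jenkins.internal.example", "sandbox-deploy", logId);
}
```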
<h2 id="heading-configuring-aws-resources">Configuring AWS Resources</h2>
<p>We used the following AWS resources for Sandbox deployment:</p>
<h3 id="heading-amazon-ecs">Amazon ECS</h3>
<p>Amazon ECS is a fast and highly scalable container management service that can be used to run, stop, and manage containers on a cluster. We use it for application deployment and scaling.</p>
<h3 id="heading-ecr">ECR</h3>
<p>Amazon ECR is AWS's container registry for storing Docker images. ECS pulls images from ECR at deploy time.</p>
<h3 id="heading-alb-application-load-balancer">ALB (Application Load Balancer)</h3>
<p>We use the application load balancer to distribute incoming traffic among EC2 instances.</p>
<h3 id="heading-security-group">Security Group</h3>
<p>This is used to control incoming and outgoing traffic for our EC2 instances.</p>
<h3 id="heading-dns-andamp-load-balancers">DNS &amp; Load Balancers</h3>
<p>The Domain Name System (DNS) turns domain names into IP addresses, which browsers use to load internet pages.</p>
<p>Load Balancer acts as a reverse proxy and distributes the incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses.</p>
<p>We have a specific domain name convention that we have optimized for our use case:</p>
<pre><code>https://&lt;sandbox-name&gt;.&lt;service-name&gt;.shipsy.io
</code></pre><p>We have also created a wildcard subdomain in AWS Route 53, where our DNS zone is present:</p>
<pre><code>*.demoprojectxsandbox2.shipsy.io
</code></pre><p>This DNS record is mapped to its corresponding application load balancer DNS name. Then the load balancers forward the request to AWS Target Group using HTTP/HTTPS listener rules. </p>
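<p>The domain convention and wildcard matching above can be captured in a small helper (an illustrative stand-in for the internal <code>generateUrl()</code>, not the real implementation):</p>

```javascript
// Illustrative helper following the convention
// https://<sandbox-name>.<service-name>.shipsy.io
function sandboxServiceUrl(sandboxName, serviceName) {
  return `https://${sandboxName}.${serviceName}.shipsy.io`;
}

// A wildcard DNS record such as *.example.shipsy.io matches any single
// leftmost label, so every generated hostname resolves to the same ALB.
function matchesWildcard(host, wildcard) {
  const suffix = wildcard.replace(/^\*/, ""); // e.g. ".example.shipsy.io"
  return host.endsWith(suffix) && host.length > suffix.length;
}
```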
<p>HTTP/HTTPS listener rules examples:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662100227036/f9WhTf8c3.png" alt="Screenshot 2022-09-02 at 11.58.40 AM.png" /></p>
<p>Next, we use the Target Group to route requests to one or more registered targets, in our case EC2 instances. This Target Group has all the information such as the IP address of the EC2 instance, port, health check status, etc.</p>
<p>Load balancers have a limit on listener rules, so we have provisioned the number of load balancers for our demo sandbox according to our use case.</p>
<p>The entire conversation flow goes like this:</p>
<p>DNS record  → Application Load Balancer → Target Group → EC2 instance → Container </p>
<p>Take a look at the following image for a better understanding:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662100439928/-eHxTT582.png" alt="Screenshot 2022-09-02 at 12.03.27 PM.png" /></p>
<h2 id="heading-deployment-behind-the-scenes">Deployment - Behind the Scenes</h2>
<p>We have already mentioned that the backend server inserts user deployment data into the database and triggers Jenkins job with the parameter - <code>newly created demo deployment log id</code>. </p>
<p>Then, the Jenkins Pipeline executes the deployment script over SSH with the parameter - <code>recently received log id</code>, so that we can fetch necessary user deployment data from the database.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662100626750/TwA9dTYG9.png" alt="Screenshot 2022-09-02 at 12.05.45 PM.png" /></p>
<p>Next, we discuss the deployment script steps in detail.</p>
<h3 id="heading-1-generate-code-folder">1. Generate Code Folder</h3>
<p>Here, we are generating the code folder in two ways:</p>
<ul>
<li>We clone the repository for an entirely new deployment</li>
<li>We pull the latest custom branch of that repo in case of redeployment</li>
</ul>
<pre><code>const codePath = `${branchPath}/code`;
if (!fs.existsSync(codePath)) {
  fs.mkdirSync(codePath);
  shellChangeDirectory(codePath);

  try {
    console.log(
      `git clone git@${repository}:shipsy/${repository}.git ${codePath}`
    );
    shellExecuteCommand(
      `git clone git@${repository}:shipsy/${repository}.git ${codePath}`
    );
  } catch (e) {
    throw `Couldn't clone the repository`;
  }

  try {
    shellChangeDirectory(codePath);
    compareProdLatestCommitWithCustomBranch(repository, branch);
    shellExecuteCommand(`git checkout ${branch}`);
  } catch (e) {
    if (e === errorCheck.HEAD_COMMIT_NOT_FOUND) {
      try {
        console.log(
          `Branch not up to date, trying to auto pull ${prefillDefaultBranch[repository]} branch`
        );
        shellExecuteCommand(
          `git pull origin ${prefillDefaultBranch[repository]}`
        );
        shellExecuteCommand(`git push origin ${branch}`);
        console.log(`Auto pull and push succeeded!`);
      } catch (err) {
        try {
          shellExecuteCommand(`rm -rf ${branchPath}`);
        } catch (error) {
          throw `Invalid branch ${branch}`;
        }
        throwErrorMsgForCommitDiff(repository);
      }
    } else {
      try {
        shellExecuteCommand(`rm -rf ${branchPath}`);
      } catch (e) {
        throw `Invalid branch ${branch}`;
      }
      throw `Invalid branch ${branch}`;
    }
  }
}
</code></pre><h3 id="heading-2-generate-config-files">2. Generate Config Files</h3>
<p>Here, the main challenge was ensuring that each service is deployed with the correct config files, which must point to the URLs of the other services deployed in the same cluster. </p>
<p>We overcame this with a standard naming convention and by computing up front how many listener rules a deployment needs. We loop through all the available sandbox load balancers and check whether a single load balancer can accommodate all of those rules.</p>
<p>This guarantees that all the services are deployed behind the same load balancer, so we can predict the generated domain name for every service.</p>
<pre><code>if (rulesNeeded !== 0) {
  for (const [index, arn] of httpsListenerARNs.entries()) {
    // taking lock on current alb and increment reserved rules
    const { existingRulesReserved } = await updateReservedRules(
      deploymentLogId,
      arn,
      rulesNeeded,
      "increment"
    );

    const httpsListenerRuleLimitExceeded = await isListenerRuleLimitExceeded({
      loadBalancerClient,
      listenerARN: arn,
      rulesNeeded: existingRulesReserved
    });

    console.log({ httpsListenerRuleLimitExceeded });

    if (!httpsListenerRuleLimitExceeded) {
      httpsListenerARN = arn;
      sandboxEnvIndex = index;
      break;
    }

    // no need to reserve rules in current alb if limit is already exceeded
    await updateReservedRules(
      deploymentLogId,
      arn,
      rulesNeeded,
      "decrement"
    );
  }
}
</code></pre><p>The <code>generateUrl()</code> function generates the predicted domain name of the service based on the sandbox environment name and the available load balancer index.</p>
<pre><code>function updateCourierTrackingConfigDependencyURLs({
  sanitizedEnvironmentName,
  configSourcePath,
  fileName,
  sandboxEnvIndex,
  sandboxResourceMap,
  servicesPresent
}) {
  const projectXBaseUrl = `https://${generateUrl(
    "projectx",
    sanitizedEnvironmentName,
    servicesPresent,
    sandboxResourceMap?.["projectx"]?.["sandboxEnvIndex"] ?? sandboxEnvIndex
  )}`;
  const ltlBaseUrl = `https://${generateUrl(
    "ltl-backend",
    sanitizedEnvironmentName,
    servicesPresent,
    sandboxResourceMap?.["ltl-backend"]?.["sandboxEnvIndex"] ?? sandboxEnvIndex
  )}`;

  const applicationConfig = fs.readFileSync(
    `${configSourcePath}/${fileName}`,
    "utf-8"
  );
  const parsedApplicationConfig = JSON.parse(applicationConfig);
  parsedApplicationConfig["PROJECTX_BASE_URL"] = projectXBaseUrl;
  parsedApplicationConfig["ltl_base_url"] = ltlBaseUrl;

  fs.writeFileSync(
    `${configSourcePath}/${fileName}`,
    JSON.stringify(parsedApplicationConfig),
    "utf-8"
  );
  return JSON.stringify(parsedApplicationConfig, null, 4);
}
</code></pre><p>Here, we faced another challenge: since we check listener rule availability at the start of the script, what happens in the case of concurrent sandbox deployments?</p>
<p>We overcame this by storing each load balancer's rule usage in the database and taking a lock on the corresponding row. Concurrent sandbox deployments can no longer take a lock on the same load balancer simultaneously, thereby avoiding the race condition.</p>
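<p>The reserve-then-roll-back logic can be illustrated with an in-memory analogue. In the real setup the reservation happens under a database row lock per load balancer; here a plain object stands in for the table, and the 100-rule limit mirrors the default ALB rules-per-listener quota. All names are illustrative.</p>

```javascript
// In-memory sketch of choosing a load balancer with enough listener-rule headroom.
const RULE_LIMIT = 100; // default ALB rules-per-listener quota (assumed here)

function pickLoadBalancer(albs, rulesNeeded) {
  for (const alb of albs) {
    // "increment": reserve the rules on this ALB (done under a row lock in the real version)
    alb.reserved += rulesNeeded;
    if (alb.existingRules + alb.reserved <= RULE_LIMIT) {
      return alb.arn; // enough headroom on this ALB
    }
    // limit exceeded: roll the reservation back ("decrement")
    alb.reserved -= rulesNeeded;
  }
  return null; // no ALB can host this deployment
}
```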
<h3 id="heading-3-create-andamp-register-task-definition">3. Create &amp; Register Task Definition</h3>
<p>A task is a running container, and its settings are defined in the task definition. A task definition is required to run Docker containers in Amazon ECS.</p>
<pre><code>function getCourierTrackingDefinition({
  repository,
  sanitizedEnvironmentName,
  imageTag
}) {
  return {
    containerDefinitions: [
      {
        name: `courier-tracking-${sanitizedEnvironmentName}-container`,
        image: `${ecrURI}/${repository}:${imageTag}`,
        essential: true,
        logConfiguration: {
          logDriver: "awslogs",
          options: {
            "awslogs-group": "/ecs/demo-courier-tracking-ec2-task",
            "awslogs-region": "us-west-2",
            "awslogs-stream-prefix": "ecs"
          }
        },
        portMappings: [
          {
            hostPort: 0, // port will be assigned when task is fired up
            protocol: "tcp",
            containerPort: repositoryContainerPort[repository]
          }
        ],
        cpu: "256",
        memory: "512"
      }
    ],
    family: `courier-tracking-${sanitizedEnvironmentName}-ec2-task`,
    requiresCompatibilities: ["EC2"]
  };
}
</code></pre><h3 id="heading-4-create-target-group">4. Create Target Group</h3>
<p>Next, we created target groups from the script:</p>
<pre><code>const createTargetGroup = async ({
  loadBalancerClient,
  repository,
  sanitizedEnvironmentName
}) =&gt; {
  return new Promise((resolve, reject) =&gt; {
    const params = {
      Name: `tg-${repositoryTargetGroup[repository]}-${sanitizedEnvironmentName}`
        .substring(0, 32)
        .replace(/-\s*$/, ""), // target group name can not have more than 32 characters
      Port: repositoryContainerPort[repository],
      Protocol: "HTTP",
      VpcId: VPC,
      HealthCheckPath: repositoryHealthCheckMapping[repository]
    };
    console.log(params);
    loadBalancerClient.createTargetGroup(params, (err, data) =&gt; {
      console.log(err, data);
      if (err) {
        return reject(err);
      }
      resolve(data.TargetGroups[0].TargetGroupArn);
    });
  });
};
</code></pre><h3 id="heading-5-add-listener-rules-to-load-balancer">5. Add Listener Rules to Load Balancer</h3>
<p>The listener rules are created dynamically from the script, and an example is shown below:</p>
<pre><code>const addHTTPSRuleToListener = async ({
  loadBalancerClient,
  listenerARN,
  priority,
  hostHeader,
  targetGroupARN,
  pathPatterns = []
}) =&gt; {
  return new Promise((resolve, reject) =&gt; {
    const params = {
      Actions: [
        {
          TargetGroupArn: targetGroupARN,
          Type: "forward"
        }
      ],
      Conditions: [
        {
          Field: "host-header",
          Values: [hostHeader]
        },
        ...pathPatterns
      ],
      ListenerArn: listenerARN,
      Priority: priority
    };
    loadBalancerClient.createRule(params, (err, data) =&gt; {
      if (err) {
        return reject(err);
      }
      resolve(data?.Rules[0].RuleArn);
    });
  });
};
</code></pre><h3 id="heading-6-create-andamp-deploy-service">6. Create &amp; Deploy Service</h3>
<p>We use an Amazon ECS service to run and maintain a specified number of instances of a task definition simultaneously in an Amazon ECS cluster.</p>
<pre><code>function getCourierTrackingServiceParams({
  cluster,
  taskDefinition,
  repository,
  sanitizedEnvironmentName,
  targetGroupARN
}) {
  return {
    cluster,
    taskDefinition,
    serviceName: `courier-tracking-${sanitizedEnvironmentName}-service`,
    loadBalancers: [
      {
        targetGroupArn: targetGroupARN,
        containerName: `courier-tracking-${sanitizedEnvironmentName}-container`,
        containerPort: repositoryContainerPort[repository]
      }
    ],
    desiredCount: 1,
    role: "ecsServiceRole"
  };
}
</code></pre><p>Now that we have created a sandbox, we need to make sure that the entire undertaking stays cost-efficient as well.</p>
<h2 id="heading-on-demand-vs-spot-instances-cost-considerations">On-Demand vs Spot Instances: Cost Considerations</h2>
<p>Amazon EC2 offers several purchasing options; the two relevant here are on-demand instances (the default) and spot instances. On-demand instances come with no long-term commitment and are available whenever you need them. However, this availability comes at a higher price.</p>
<p>For us, daily sandbox deployments sit at around 40, of which 10 are completely new. We run about 150 active container instances and 470 active branches.</p>
<p>So, on-demand instances amounted to $445.04 per month.</p>
<p>Spot instances, on the other hand, are spare EC2 capacity offered at a steep discount. You set the maximum price you are willing to pay, and as long as free capacity exists and the current spot price stays at or below your maximum, you get the instance.</p>
<p>Once the free capacity is exhausted, or the current spot price rises above your maximum, your spot instance is terminated. </p>
<p>By using these spot instances, we were able to reduce the monthly billings to $133.72, which meant 70% cost savings.</p>
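<p>The savings figure follows directly from the two monthly bills:</p>

```javascript
// Check the quoted ~70% savings from the two monthly bills above.
const onDemandMonthly = 445.04; // USD, on-demand instances
const spotMonthly = 133.72;     // USD, spot instances
const savings = 1 - spotMonthly / onDemandMonthly;
console.log(`${Math.round(savings * 100)}% saved`); // prints "70% saved"
```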
<p>However, this came with another challenge - no default instance termination warning from Amazon. Whenever the spot price exceeds your maximum, or free capacity is exhausted, the instance can be terminated even when you are in the middle of a deployment. </p>
<p>We addressed this via a two-pronged approach:</p>
<ul>
<li>Notification before instance termination</li>
<li>Ensuring instance availability for interruption-free working</li>
</ul>
<h3 id="heading-1-notifications">1. Notifications</h3>
<p>Whenever an instance is about to terminate, it goes into a Draining state, during which no new tasks can be placed on the instance.
We used the following shell script to set these notifications into action:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662103093894/MhRIv7bnGE.png" alt="Screenshot 2022-09-02 at 12.44.54 PM.png" /></p>
<p>The command “instance_draining_true” is launched right after getting an instance. As soon as the instance enters the draining state, the script starts placing its tasks on another EC2 instance to avoid downtime and kicks off a buffer period of 2 minutes to ensure no work is lost.</p>
<h3 id="heading-2-ensuring-instance-availability">2. Ensuring Instance Availability</h3>
<p>To ensure that we have extra EC2 instances available, we make use of Spot Fleet. A Spot Fleet is a collection of multiple EC2 instances of almost similar configurations that we can choose during the ECS cluster configuration, as shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1662102349292/apAb_Tnbo.png" alt="Screenshot 2022-09-02 at 12.35.18 PM.png" /></p>
<p>By doing so, we ensure that spare EC2 instances are available when one goes into a draining state, and the probability of getting instances of a similar configuration also increases.</p>
<p>Further, we have configured a specific threshold value for auto-scaling and auto-downscaling of instances in case the instance capacity utilization increases or decreases. </p>
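<p>Such thresholds are typically expressed as a target-tracking scaling policy; an illustrative Application Auto Scaling configuration (the values are placeholders, not Shipsy's actual thresholds) might look like:</p>

```json
{
  "TargetValue": 75.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleOutCooldown": 60,
  "ScaleInCooldown": 300
}
```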
<h3 id="heading-3-scaling-testing-operations-with-cost-efficiency">3. Scaling Testing Operations with Cost Efficiency</h3>
<p>In an agile organization like Shipsy, operational scaling, especially in testing, comes with several critical considerations, of which robust performance and cost efficiency are paramount. With spot instances, we were able to scale our UAT testing in a smart and reliable manner, all the while reducing testing costs by up to 70%. </p>
<p>At Shipsy, we have a highly agile and innovative tech community of developers committed to making logistics and supply chain processes better, sharper, and more efficient with code that gets better every day!</p>
<p>If you wish to be a part of Team Shipsy, please visit our <a target="_blank" href="https://shipsy.io/careers/">Careers Page</a>.</p>
<h2 id="heading-acknowledgments-and-contributions">Acknowledgments and Contributions</h2>
<p>As an effort towards consistent learning and skill development, we have regular “Tech-A-Break” sessions at Shipsy where team members exchange notes on specific ideas and topics.</p>
<p>This write-up stems from a recent Tech-A-Break session on demo Sandbox, helmed by Viraj Shah and Semal Sherathia.</p>
]]></content:encoded></item><item><title><![CDATA[How to Automate End-to-End UI Testing With Cypress: A Detailed Step-by-Step Guide]]></title><description><![CDATA[Testing in a rapid release environment can easily become an incrementally expensive task. While a minor change can break the previously authored tests, testing the static components for high-quality releases also calls for repetitive testing of compo...]]></description><link>https://engineering.shipsy.io/how-to-automate-end-to-end-ui-testing-with-cypress</link><guid isPermaLink="true">https://engineering.shipsy.io/how-to-automate-end-to-end-ui-testing-with-cypress</guid><category><![CDATA[Cypress]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Automated Testing]]></category><category><![CDATA[ui testing]]></category><category><![CDATA[front end automation]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Tue, 19 Jul 2022 10:54:40 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1658224729925/ViM6QEN39.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Testing in a rapid release environment can easily become an incrementally expensive task. While a minor change can break the previously authored tests, testing the static components for high-quality releases also calls for repetitive testing of components that were not changed at all.</p>
<p>Hence, test automation!</p>
<p>Automating end-to-end UI testing significantly increases the test coverage rates and facilitates scaling of testing operations by reusing the test scripts created earlier. 
Automated testing also makes test execution faster and more accurate through efficient, well-maintained test scripts.</p>
<p>However, for test automation, the idea is to look for a reliable and intuitive platform that can be used by both the test engineers and developers for extended benefits. 
One such platform is Cypress, which allows testing engineers and developers to create web test automation scripts in JS.</p>
<p>Here is a detailed step-by-step tutorial from us, at Shipsy, for front-end test automation with Cypress. The tutorial starts with system requirements, and Cypress installation and also features code snippets from a test demo for a better understanding.</p>
<p>Let us get started with the basics!</p>
<h2 id="heading-choice-of-platform-why-cypress">Choice of Platform: Why Cypress?</h2>
<p>Cypress is an open-source UI automation tool that can be used for integration and end-to-end testing automation. While other testing automation tools like Selenium sit on the compatibility layer (this layer sits between the browser and the Selenium testing framework), Cypress sits right inside the browser and performs all the tasks there.</p>
<p>This is possible because the test runner of Cypress injects the code into the browser and gets the information in and out of it. This, in turn, is possible because Cypress is built on Node.js and uses JavaScript for writing tests.</p>
<p>Further, the Mocha and Chai libraries are built-in and the platform is fast to set up, install and execute.</p>
<p>Finally, the software automatically takes snapshots as the tests run. The user can hover over each command in the command log to see exactly what happened in each step. They can also check the video playback of every test execution for more visibility into testing.</p>
<p>A similar comparison holds for Puppeteer. For quick, small-scale testing, Puppeteer is great, but if you have to test an entire application (which is generally the case), Cypress emerges as a reliable, robust, and resilient platform that offers impeccable performance and results.</p>
<p>The reason for the difference is that Puppeteer is a browser-automation library, while Cypress is a full end-to-end testing platform for writing and automating UI tests. Further, the Cypress team has done an amazing job with the documentation.</p>
<h2 id="heading-getting-started-setting-cypress">Getting Started: Setting Cypress</h2>
<p>As Cypress sits in the browser, it allows much more control over DOM in general, because there is not too much “to-and-fro” among layers or drivers. </p>
<p>Now, one might think that the native side of events might suffer because of this, but Cypress offers repositories for native events. </p>
<p>Next, we share the step-wise process of setting up Cypress.</p>
<h3 id="heading-1-download-nodejs">1. Download NodeJS</h3>
<p>Install Node.js version 12 or newer.</p>
<h3 id="heading-2-setting-cypress">2. Setting Cypress</h3>
<p>It is a good practice to place the files for integration, unit testing, and end-to-end testing in the project folder for which tests are being written.</p>
<p>Now one might ask - “Why?”</p>
<p>As there is no code sharing between the files, Cypress could technically be kept apart from the main project. However, when we are using CI/CD for automatic deployment, having all the tests in the same repository makes test execution easier and faster.</p>
<h3 id="heading-3-initializing-cypress-test-directory">3. Initializing Cypress Test Directory</h3>
<p>We can initialize “npm project” in two ways:</p>
<ul>
<li>Using the Yarn package manager</li>
<li>Using the <code>npm</code> package manager (a more popular option)</li>
</ul>
<p>For this, we run the following command:</p>
<pre><code>npm init -y
</code></pre><p>This completes the initialization of a package.json file that manages the dependencies, scripts, and execution of the project being initialized via Yarn or npm.</p>
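<p>For reference, here is a sketch of what the generated <code>package.json</code> might look like once the Cypress scripts from the later steps are added; the project name, script aliases, and version pin are our illustration, not fixed conventions:</p>

```json
{
  "name": "cypress-ui-tests",
  "version": "1.0.0",
  "scripts": {
    "cy:open": "cypress open",
    "cy:run": "cypress run --headless"
  },
  "devDependencies": {
    "cypress": "^9.0.0"
  }
}
```

<p>With such scripts in place, <code>npm run cy:open</code> and <code>npm run cy:run</code> become shorthand for the commands discussed below.</p>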
<h3 id="heading-4-install-cypress">4. Install Cypress</h3>
<p>Next, install Cypress and download the Cypress binary:</p>
<pre><code>npm install cypress --save-dev
</code></pre><p>While Cypress ships with its own Electron-based browser (a basic version of Chromium), any other installed browser can also be used.</p>
<h3 id="heading-5-run-cypress-test-runner">5. Run Cypress Test Runner</h3>
<p>Next, run the command:</p>
<pre><code>npx cypress open
</code></pre><p>This will open the Cypress test runner, which has 5 folders:</p>
<ul>
<li>Fixtures</li>
<li>Integration</li>
<li>Plugins </li>
<li>Support</li>
<li>Video</li>
</ul>
<p>Once the Cypress launchpad is opened, you can start writing scripts. Scripts can be run in headless mode as well via this command:</p>
<pre><code>npx cypress run --headless
</code></pre><p>This mode is required when we are running multiple tests in a CI/CD environment (a display-less environment with no GUI). </p>
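<p>A CI job only needs this headless command; below is a minimal sketch of such a pipeline step, assuming GitHub Actions (the workflow and job names are illustrative, not part of our setup):</p>

```yaml
# Minimal CI sketch: install dependencies and run Cypress headlessly on push.
name: ui-tests
on: [push]
jobs:
  cypress:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
      - run: npm ci
      - run: npx cypress run --headless
```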
<p>This is another advantage of using Cypress - headless testing can be automated.
Next, we discuss the various folders in the Cypress test runner.</p>
<p>Cypress comes with a comprehensive testing example file. You can check that out before writing your first test. </p>
<p>Now, that’s everything you need to know to start using Cypress, but for a better understanding, we are going to discuss the Cypress framework in detail and explore it with a detailed test case in the following sections.</p>
<h2 id="heading-cypress-folders">Cypress Folders</h2>
<h3 id="heading-1-fixtures">1. Fixtures</h3>
<p>The Fixture folder contains the dummy data that we wish to use in testing. This comes in handy when APIs are too expensive to call or cannot be accessed over the public internet.</p>
<p>In such cases, we can move the data from that particular API into a JSON file in the fixtures folder and conveniently get data for testing.</p>
<p>Now, this brings us to another question - “<em>Why store this JSON file in fixtures only, and not store this outside in any other folder?</em>”</p>
<p>We do this because Cypress offers out-of-the-box support for using these files statically, and no manual data import is required for the JSON data. Hence, data imports become effortless.</p>
<h3 id="heading-2-integrations">2. Integrations</h3>
<p>This folder is extremely important for testing as it contains all the test (or spec) files. We can create multiple folders inside the integrations folder for test file categorization.</p>
<p>These test files contain the test description in the <code>describe</code> method. Every test file should ideally have only one <code>describe</code>, which can have multiple tests in it.</p>
<p>Hence, the <code>describe</code> method can be understood as a suite of tests, and all these tests are individually explained under this method. </p>
<h3 id="heading-3-plugins">3. Plugins</h3>
<p>Plugins are used to extend the functionality of Cypress, such as in cases where we need <code>OAuth</code> from browsers, which is a tricky process.</p>
<h3 id="heading-4-support">4. Support</h3>
<p>This folder allows the user to create reusable code and custom commands. This becomes crucial for long, frequently repeated flows, such as <code>user login</code>.
The code in the support folder can be called again and again as and when required.</p>
<h3 id="heading-5-video">5. Video</h3>
<p>This folder contains the video playbacks of all the test executions. These videos can be played by the user to identify any problem or issue with test runs. </p>
<p>This is important in a CI/CD environment where test logs might not offer an in-depth understanding of the test results. </p>
<p>Next, we share a real-time demonstration of automating testing with Cypress and some results of how this automation can speed up the testing without affecting release quality. </p>
<h2 id="heading-ui-testing-automation-with-cypress-implementation-snippets">UI Testing Automation with Cypress - Implementation Snippets</h2>
<p>The first step is to create a Cypress framework for automating UI testing. This is covered in detail in the next sections.</p>
<h3 id="heading-1-creating-a-page-object-model">1. Creating A Page Object Model</h3>
<p>Every page has specific DOM elements that are required for automation, and the page object model keeps them in one place. </p>
<p>Now, for creating a page object model, we have to create a separate JS file for each web page. These JS files have the object elements of these pages that are used in DOM test cases.</p>
<p>In Cypress, this looks like this:</p>
<pre><code><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">page_customer</span> </span>{

    page_user_id() {
        <span class="hljs-keyword">return</span> cy.<span class="hljs-keyword">get</span>(<span class="hljs-string">"#username"</span>)
    }

    page_password() {
        <span class="hljs-keyword">return</span> cy.<span class="hljs-keyword">get</span>(<span class="hljs-string">"#password"</span>)
    } 

    page_login() {
        return cy.get('.submit-button')
    }
}
</code></pre><p>The page objects are now accessible from the page object folder, and there is no need to dig into the codebase every time we need to change or reuse them. </p>
<h3 id="heading-2-custom-functions">2. Custom Functions</h3>
<p>In the next step, we have to create custom functions for specific tasks, such as consignment creation, which requires sender and destination details.</p>
<pre><code><span class="hljs-keyword">import</span> page_consignment <span class="hljs-keyword">from</span> <span class="hljs-string">'../../PageObjects/pageObjects_express/page_consignment'</span>
<span class="hljs-keyword">const</span> page_consignment1 = <span class="hljs-keyword">new</span> page_consignment()
<span class="hljs-keyword">var</span> first_mile_trip
<span class="hljs-keyword">const</span> first_mile_file = <span class="hljs-string">'cypress/fixtures/fixtures_express/fixtures_firstmile.json'</span>



<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">custom_consignment</span> </span>{

    custom_opencrm() {
        page_consignment1.page_menu_button().click()
        page_consignment1.page_crm_menu().click({<span class="hljs-attr">force</span>:<span class="hljs-literal">true</span>})
        page_consignment1.page_crm().click({<span class="hljs-attr">force</span>:<span class="hljs-literal">true</span>})
    }
</code></pre><p>So, each JS file now has custom functions related to specific tasks, and these functions can be called separately, as and when required, in multiple test cases.</p>
<p>The data for these custom functions comes from fixtures and environment settings, which we are going to explain in the coming sections. Basically, all the data comes from outside the main codebase, which makes testing highly scalable and the codebase more secure.</p>
<h3 id="heading-3-cucumber-integration-bdd-feature-integration">3. Cucumber Integration (BDD Feature Integration)</h3>
<p>Cucumber is a Gherkin-based tool that supports Behavior-Driven Development (BDD). It offers the capability to write our tests in a syntax similar to English. </p>
<p>A feature file is the entry point to the Cucumber tests of your framework. It is a file where you write your tests or acceptance criteria in the descriptive Gherkin language (plain English-like text). A feature file can include one or many scenarios written in the <code>Given-When-Then</code> format.</p>
<pre><code>Feature: CREATION <span class="hljs-keyword">OF</span> DOCUMENT <span class="hljs-keyword">TYPE</span> DOMESTIC CONSIGNMENT <span class="hljs-keyword">AND</span> PICK UP REQUEST <span class="hljs-keyword">FROM</span> CUSTOMER PORTAL.

Feature Description
    Creation "Domestic Consignment" <span class="hljs-keyword">and</span> "Pick up Request" <span class="hljs-keyword">from</span> customer portal.
    Complete pick up <span class="hljs-keyword">to</span> "Pick up schedule" <span class="hljs-keyword">by</span> rider app.


        Scenario: Domestic Consignment should be created <span class="hljs-keyword">after</span> enter <span class="hljs-keyword">all</span> mandatory fields <span class="hljs-keyword">like</span> Source Address,Delivery Address,item <span class="hljs-keyword">type</span>, weight, service <span class="hljs-keyword">type</span> etc
            Given Customer Portal Url,Customer <span class="hljs-keyword">User</span> id,Customer <span class="hljs-keyword">Password</span> ,Submit button <span class="hljs-keyword">of</span> Customer Portal
             <span class="hljs-keyword">When</span> Click <span class="hljs-keyword">on</span> Consignment button
              <span class="hljs-keyword">And</span> Click <span class="hljs-keyword">on</span> the Single Consignment button
              <span class="hljs-keyword">And</span> Please <span class="hljs-keyword">select</span> the invoice number
              <span class="hljs-keyword">And</span> Enter the source address
              <span class="hljs-keyword">And</span> <span class="hljs-keyword">select</span> source address <span class="hljs-keyword">from</span> saves address <span class="hljs-keyword">option</span>
              <span class="hljs-keyword">And</span> Enter the delivery address
              <span class="hljs-keyword">And</span> <span class="hljs-keyword">select</span> delivery address <span class="hljs-keyword">from</span> saves address <span class="hljs-keyword">option</span>
              <span class="hljs-keyword">And</span> <span class="hljs-keyword">select</span> <span class="hljs-keyword">option</span> document <span class="hljs-keyword">type</span>
              <span class="hljs-keyword">And</span> Enter the weight
              <span class="hljs-keyword">And</span> <span class="hljs-keyword">Select</span> the service <span class="hljs-keyword">type</span>
              <span class="hljs-keyword">And</span> Click <span class="hljs-keyword">on</span> Upload details button
             <span class="hljs-keyword">Then</span> Consignment should be created <span class="hljs-keyword">with</span> success message
              <span class="hljs-keyword">And</span> Extract the Consignment Number <span class="hljs-keyword">as</span> variable <span class="hljs-keyword">to</span> use <span class="hljs-keyword">in</span> another cases

        Scenario: <span class="hljs-keyword">After</span> <span class="hljs-keyword">Create</span> Consignment, Its should be displayed <span class="hljs-keyword">in</span> "Search" page <span class="hljs-keyword">with</span> status "Soft Data Upload" <span class="hljs-keyword">after</span> run the scheduler API.
            Given API AUTOMATION-API url, headers <span class="hljs-keyword">of</span> scheduler API
             <span class="hljs-keyword">When</span> API AUTOMATION-hit the scheduler API
              <span class="hljs-keyword">And</span>  Enter the consignment reference number <span class="hljs-keyword">in</span> <span class="hljs-keyword">search</span> <span class="hljs-type">text</span> <span class="hljs-type">box</span>
              <span class="hljs-keyword">And</span> click <span class="hljs-keyword">on</span> <span class="hljs-keyword">refresh</span>
             <span class="hljs-keyword">Then</span> consignment number should be displayed <span class="hljs-keyword">in</span> <span class="hljs-keyword">search</span> page
              <span class="hljs-keyword">And</span> Status <span class="hljs-keyword">of</span> consignment should be "Soft Data Upload"
</code></pre><p>A step definition is a small piece of code with a pattern attached to it. Cucumber executes this code when it encounters the matching Gherkin step in the feature file.</p>
<pre><code>Given(<span class="hljs-string">'Consignment creation url, negative Length, width, height'</span>, () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    api_consignment_url <span class="hljs-operator">=</span> env_dtdc_data.customer_portal_api_url <span class="hljs-operator">+</span> resources_consignment1.resources_consignment_create()
    cy.log(<span class="hljs-string">" Consignment Creation API URL is : "</span> <span class="hljs-operator">+</span> api_consignment_url ) 
    pieces <span class="hljs-operator">=</span> fixtures_customer.Dimensions[<span class="hljs-number">1</span>]
    cy.log(<span class="hljs-string">"Number of pieces is : "</span> <span class="hljs-operator">+</span> pieces)
    length <span class="hljs-operator">=</span> fixtures_customer.Dimensions[<span class="hljs-number">0</span>]
    cy.log(<span class="hljs-string">"Length is : "</span> <span class="hljs-operator">+</span> length)
    width <span class="hljs-operator">=</span> fixtures_customer.Dimensions[<span class="hljs-number">1</span>]
    cy.log(<span class="hljs-string">"Width is : "</span> <span class="hljs-operator">+</span> width)
    height <span class="hljs-operator">=</span> fixtures_customer.Dimensions[<span class="hljs-number">1</span>]
    cy.log(<span class="hljs-string">"Height is : "</span> <span class="hljs-operator">+</span> height)
    service_type <span class="hljs-operator">=</span> fixtures_customer.service_type[<span class="hljs-number">0</span>]
    cy.log(<span class="hljs-string">"Service type is : "</span> <span class="hljs-operator">+</span> service_type)
    customer_code <span class="hljs-operator">=</span> env_dtdc_data.customer_user_code
    cy.log(<span class="hljs-string">"Customer Code is : "</span> <span class="hljs-operator">+</span> customer_code)
    customer_user_id <span class="hljs-operator">=</span> env_dtdc_data.customer_id
    cy.log(<span class="hljs-string">"Customer User ID is : "</span> <span class="hljs-operator">+</span> customer_user_id )
    customer_access_token <span class="hljs-operator">=</span> env_dtdc_data.consignment_access_token
    cy.log(<span class="hljs-string">"customer_access_token is : "</span> <span class="hljs-operator">+</span> customer_access_token) 

})
</code></pre><h3 id="heading-4-how-to-integrate-cucumber-with-cypress">4. How to integrate Cucumber with Cypress?</h3>
<p>Run the following command:</p>
<pre><code>npm install cypress<span class="hljs-operator">-</span>cucumber<span class="hljs-operator">-</span>preprocessor
</code></pre><p>Once this package is installed (alongside Cypress itself), we register the cucumber preprocessor in the <code>plugins/index.js</code> file:</p>
<pre><code>const cucumber = require('cypress-cucumber-preprocessor').default

module.exports = (on, config) =&gt; {
  // `on` is used to hook into various events Cypress emits
  // `config` is the resolved Cypress config
  on('file:preprocessor', cucumber())
}
</code></pre><p>Next, we bind the step definitions by adding the below configuration to the <code>package.json</code> file:</p>
<pre><code><span class="hljs-attr">"cypress-cucumber-preprocessor":</span> {
<span class="hljs-attr">"nonGlobalStepDefinitions":</span> <span class="hljs-literal">true</span>
}
</code></pre><p>The next configuration change is in the <code>cypress.json</code> file:</p>
<pre><code>{
<span class="hljs-attr">"testFiles"</span>: <span class="hljs-string">"**/*.{feature,features}"</span>
}
</code></pre><h3 id="heading-5-how-to-write-cucumber-bdd-tests-in-the-cypress-framework">5. How to Write Cucumber BDD Tests in the Cypress Framework?</h3>
<p>The feature file is created with a <code>.feature</code> extension under the integration folder. </p>
<p>Let’s create a feature file with a scenario in it. </p>
<p>Create a feature file named “cucumber.feature”:</p>
<pre><code>Feature: Customer Portal login

Scenario: Log in to the customer portal
    Given Url, user id and password of the customer portal.
    When click on login after entering all the above details.
    Then Customer portal should be opened successfully.
</code></pre><p>Next, we create another folder under integration with the same name as the feature file and add one JS file in it to write the step definitions:</p>
<p>We need to import <code>{Given, When, Then}</code> from the cypress-cucumber-preprocessor/steps package:</p>
<pre><code><span class="hljs-keyword">import</span> {Given, <span class="hljs-keyword">When</span>, <span class="hljs-keyword">Then</span>} <span class="hljs-keyword">from</span> "cypress-cucumber-preprocessor/steps"
</code></pre><p>In our framework, the integration folder holds feature files for different projects, and each feature file contains test scenarios written in basic English.</p>
<h3 id="heading-6-step-definitions-for-cucumber">6. Step Definitions for Cucumber</h3>
<p>The code for these features is written in the step definitions folder, which has step.express.js files for each step.</p>
<pre><code>Given(<span class="hljs-string">'Consignment creation page and required details.'</span>, () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {

     customer <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_customer_code"</span>)
     cy.log(<span class="hljs-string">"customer code is : "</span> <span class="hljs-operator">+</span> customer)
     custom_consignment1.custom_opencrm()
     custom_crm1.custom_add_consignment()
     destination_name <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_destination_details_name"</span>)
     destination_mobile <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_destination_details_mobile"</span>)
     destination_address <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_destination_address"</span>)
     destination_city <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_destination_city"</span>)
     destination_state <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_destination_state"</span>)
     destination_pincode <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_destination_pin_code"</span>)
     destination_country <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">"express_destination_country"</span>)
     sender_name <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">'express_origin_details_name'</span>)
     sender_phone <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">'express_origin_details_mobile'</span>)
     sender_city <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">'express_origin_city'</span>)
     sender_state <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">'express_origin_state'</span>)
     sender_pincode <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">'express_sender-pin_code'</span>)
     sender_country <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">'express_origin_country'</span>)


})

When(<span class="hljs-string">'Enter all the values and press submit.'</span>, () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {

     custom_crm1.custom_account(customer)
     custom_crm1.custom_destination_details(destination_name, destination_mobile, destination_address, destination_city, destination_state, destination_pincode, destination_country)
     custom_crm1.custom_sender_details(sender_name, sender_phone, sender_city, sender_state, sender_pincode, sender_country)
     custom_crm1.custom_weight(fixture_crm.weight[<span class="hljs-number">0</span>])
     custom_crm1.custom_service(fixture_crm.service[<span class="hljs-number">0</span>])
     custom_crm1.custom_upload_details()
     cy.wait(<span class="hljs-number">10000</span>)

})
Then(<span class="hljs-string">'Status of consignment should be "Pickup Awaited"'</span>, () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
     custom_consignment1.custom_verify_table_status(fixtures_consignment_page.header_value[<span class="hljs-number">0</span>])
})
</code></pre><p>This way, all the changes are made outside the codebase and the data can be accessed or modified without accessing the main application code.</p>
<h3 id="heading-7-fixtures">7. Fixtures</h3>
<p>Cypress provides a directory named <code>fixtures</code>, which stores various <code>JSON</code> files. These files hold test data that can be read by multiple tests. We store test data in the form of key-value pairs, which we can access in the test scripts. </p>
<p>Take a quick look at the following snippet for a better understanding:</p>
<pre><code>{
    <span class="hljs-attr">"bag_status"</span>: [<span class="hljs-string">"Created"</span>, <span class="hljs-string">"Sealed"</span>, <span class="hljs-string">"In Transit"</span>, <span class="hljs-string">"Inscan At Hub"</span>, <span class="hljs-string">"Debagged"</span>]
}
</code></pre><p>These values are compared with the values in the step definitions:</p>
<pre><code>Given(<span class="hljs-string">'Bag number and Destination hub'</span>, () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    cy.random(<span class="hljs-string">'BAG'</span>).then((number) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
        bag_number <span class="hljs-operator">=</span> number
        cy.log(<span class="hljs-string">"Bag Number is  :"</span> <span class="hljs-operator">+</span> bag_number)
    })
    destination_hub <span class="hljs-operator">=</span> Cypress.env(<span class="hljs-string">'express_destination_hub_code'</span>)
    cy.log(<span class="hljs-string">" Destination Hub is : "</span> <span class="hljs-operator">+</span> destination_hub)
})

When(<span class="hljs-string">'Click on Hub Code, Actions and Create Bag button'</span>, () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    custom_bag1.custom_actions_btn()
    custom_bag1.custom_create_bag_btn()
    cy.wait(<span class="hljs-number">2000</span>)
})
</code></pre><p>If these values match, the test cases pass; otherwise, they fail.</p>
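<p>The comparison itself boils down to a simple membership check, which can be sketched in plain JavaScript (our illustration, independent of the Cypress runtime):</p>

```javascript
// Plain-JS sketch of the fixture comparison: a status read from the UI
// passes only if it appears among the allowed values from the fixture file.
const fixtures = {
    "bag_status": ["Created", "Sealed", "In Transit", "Inscan At Hub", "Debagged"]
};

function isKnownBagStatus(status) {
    return fixtures.bag_status.includes(status);
}
```

<p>A step definition would then assert that, say, <code>isKnownBagStatus("Sealed")</code> is true, failing the test for any unexpected status.</p>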
<h2 id="heading-api-testing-automation">API Testing Automation</h2>
<p>For API testing automation, our Cypress framework has four parts:</p>
<h3 id="heading-1-headers">1. Headers</h3>
<p>All the API headers are defined in this folder and the key values are passed via the step definitions folder, as explained above.</p>
<p>Take a quick look at the screenshot for a better understanding:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658215925073/Z4PE5M2ha.png" alt="image.png" /></p>
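<p>As a hypothetical sketch (the class name, method name, and header values below are our illustration, mirroring the payloads and resources pattern shown next), a headers file is just a class whose methods return plain header objects:</p>

```javascript
// Illustrative headers class; a step definition passes the returned object
// into cy.request() as the `headers` option.
class headers_rider {

    // Headers for the rider login API call (values are placeholders)
    headers_rider_login() {
        return {
            "Content-Type": "application/json"
        };
    }
}
```

<p>In the framework, this class would be exported with <code>export default headers_rider</code>, just like the payload and resource classes below.</p>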
<h3 id="heading-2-payloads">2. Payloads</h3>
<p>All the API payloads are defined in the form of payload functions in this folder. </p>
<p>Here is a screenshot of the API payload folder:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658216092164/ZI5hQFI_h.png" alt="image.png" /></p>
<p>The code snippet for the highlighted folder is also shared below for a better understanding.</p>
<pre><code><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">payloads_rider</span> </span>{

    payloads_rider_login(username, password) {
        <span class="hljs-keyword">var</span> payloads_rider_login =
        {
            <span class="hljs-string">"username"</span>: username,
            <span class="hljs-string">"password"</span>: password
        }
        <span class="hljs-keyword">return</span> payloads_rider_login
    }
}
<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> payloads_rider;
</code></pre><h3 id="heading-3-resources">3. Resources</h3>
<p>All the resources attached to the API base URLs are defined in this class.</p>
<p>Here is a screenshot of the Resources folder from the Cypress framework:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658216228681/1Rm3btTRC.png" alt="image.png" /></p>
<p>The code snippet is as follows:</p>
<pre><code><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">resources_rider</span> </span>{

    resources_rider_login(){
        <span class="hljs-keyword">return</span> <span class="hljs-string">'/api/RiderApp/login'</span>
    }

    resources_consignment_for_prs(){
        <span class="hljs-keyword">return</span> <span class="hljs-string">'/api/RiderApp/consignmentsForPRS?worker_id='</span> 
    }

    resources_prepare_prs(){
        <span class="hljs-keyword">return</span> <span class="hljs-string">'/api/RiderApp/preparePRS'</span>
    }

    resources_arrived_location(){
        <span class="hljs-keyword">return</span> <span class="hljs-string">'/api/RiderApp/arrivedAtLocationForPickup'</span>
    }

    resources_complete_pickup(){
        <span class="hljs-keyword">return</span> <span class="hljs-string">'/api/RiderApp/updatePickupTaskStatus'</span>
    }

}

export default resources_rider
</code></pre><h3 id="heading-4-custom-api-functions">4. Custom-API Functions</h3>
<p>In the custom API folder, the functions that bring together the headers, payloads, and resources are written and executed.</p>
<p>This is how the folder looks:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658216400730/TwvPjGjVE.png" alt="image.png" /></p>
<p>The code snippet for the specific folder is also shared below:</p>
<pre><code>class custom_api_fn_rider {


    api_rider_login(url, username, password) {
        cy.request({
            method: <span class="hljs-string">'POST'</span>,
            form: <span class="hljs-literal">true</span>,
            url: url,
            body: payloads_rider1.payloads_rider_login(username, password),
            headers: headers_rider1.headers_rider_login()
        }).its(<span class="hljs-string">'body'</span>).then((response) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
            response_login <span class="hljs-operator">=</span> response
            authToken <span class="hljs-operator">=</span> response_login.data.access_token.id
            worker_id <span class="hljs-operator">=</span> response_login.data.worker.id
            cy.log(<span class="hljs-string">"token is:"</span> <span class="hljs-operator">+</span> authToken)
            cy.log(<span class="hljs-string">"worker_id is : "</span> <span class="hljs-operator">+</span> worker_id)
            cy.log(response_login)
        })
    }
}
</code></pre><p>The rest of the process resembles the one discussed for UI testing automation.</p>
<p>A sample step definition call for API testing automation is shown below:</p>
<pre><code>Given(<span class="hljs-string">'API Automation - URL, UserID and Password of Rider App'</span>,  () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    api_login_url = env_dtdc_data.api_rider_url + resources_rider1.resources_rider_login()
    api_rider_username = env_dtdc_data.api_rider_username
    api_rider_password = env_dtdc_data.api_rider_password
})

When(<span class="hljs-string">'API Automation -Enter User ID, password and press Submit'</span>,  () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
     api_rider1.api_rider_login(api_login_url, api_rider_username, api_rider_password)
})

When(<span class="hljs-string">'API Automation -write the auth token from api response as json data'</span>,  () <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
     api_rider1.api_rider_login_data_generate()
})
</code></pre><p>Next, we discuss some other components of the Cypress framework.</p>
<h3 id="heading-1-environment-settings">1. Environment Settings</h3>
<p>We have different types of environments, such as Dev environment, Demo environment, Production environment, etc.</p>
<p>We configured the environment settings so that the environment and script data for all these environments live outside the main codebase; future environment data updates then require no changes to the existing code.</p>
<p>For this, we have used a separate <code>config</code> file that has multiple <code>JSON</code> files for each environment. The test scripts for each environment get the data from these <code>JSON</code> files for every specific command.</p>
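<p>As an illustration, one such environment <code>JSON</code> could look like the sketch below (the keys mirror the <code>Cypress.env()</code> lookups used in our step definitions; the values here are placeholders, not real data):</p>

```json
{
  "env": {
    "express_customer_code": "CUST001",
    "express_destination_hub_code": "HUB042",
    "express_destination_city": "Sample City",
    "api_rider_url": "https://rider.example.com"
  }
}
```

<p>Switching between the Dev, Demo, and Production environments then only means pointing the scripts at a different file of this shape.</p>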
<h3 id="heading-2-common-functions-in-command">2. Common Functions in Command</h3>
<p>The <code>Support</code> folder in the Cypress framework has a <code>command.js</code> file that holds all the common functions, which can be called anytime and anywhere, as required, via the step definitions file.</p>
<p>Take a quick look at the screenshot that highlights the folder and file:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658216635404/OI8uH9fWB.png" alt="image.png" /></p>
<p>The code snippet for this specific file is shared below:</p>
<pre><code>Cypress.Commands.add(<span class="hljs-string">"verify_table_status"</span>, (selector1,selector2,header_name,header_value) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    cy.get(selector1).each(($id, index, $list) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
        <span class="hljs-keyword">if</span> ($id.text() <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> header_name) {
            cy.log($id.text())
            cy.get(selector2).eq(index).then(<span class="hljs-function"><span class="hljs-keyword">function</span> (<span class="hljs-params">$status</span>) </span>{
               const status <span class="hljs-operator">=</span> $status.text()
               cy.log(<span class="hljs-string">"header_value is :"</span> <span class="hljs-operator">+</span>status)
               expect(status).equal(header_value)
           })
        }
        })
})
</code></pre><h3 id="heading-3-reports">3. Reports</h3>
<p>Another useful feature of our UI testing automation framework is reporting. Reports are generated for every test script run, and thanks to our BDT framework, they are intuitive and present actionable insights in a user-friendly manner.</p>
<p>Here are a few snapshots of the reports for a better understanding:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658217145740/afWE4513P.png" alt="image (24).png" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1658217363879/ChDhno0nj.png" alt="image (23).png" /></p>
<p>These reports show the number of test cases that passed and failed, along with the execution time of each test case.</p>
<p>So, this is how UI testing automation can be done with Cypress.</p>
<p>Now, give this a try, and share your experience with us in the comments section! </p>
<p>At Shipsy, we have a highly agile and innovative tech community of developers committed to making logistics and supply chain processes better, sharper, and more efficient with code that gets better every day! </p>
<p>If you wish to be a part of Team Shipsy, please visit our <a target="_blank" href="https://shipsy.io/careers/">Careers Page</a>.</p>
<h2 id="heading-acknowledgments-and-contributions">Acknowledgments and Contributions</h2>
<p>As an effort towards consistent learning and skill development, we have regular “<a target="_blank" href="https://engineering.shipsy.io/">Tech-A-Break</a>” sessions at Shipsy where team members exchange notes on specific ideas and topics. </p>
<p>This write-up stems from a recent Tech-A-Break session on front-end UI test automation, helmed by Swatantra Srivastava.</p>
]]></content:encoded></item><item><title><![CDATA[Code-Free Label Design: Arresting Redundancies, “One at a Time”]]></title><description><![CDATA[At Shipsy, we track, deliver, and route millions of consignments across the globe, daily.
Hence, redundancies, in any form and in any business vertical, can come with huge repercussions. 
However, when we are operating at scale, redundancies have the...]]></description><link>https://engineering.shipsy.io/code-free-label-design-arresting-redundancies-one-at-a-time</link><guid isPermaLink="true">https://engineering.shipsy.io/code-free-label-design-arresting-redundancies-one-at-a-time</guid><category><![CDATA[React]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[React Native]]></category><category><![CDATA[postgres]]></category><category><![CDATA[#react-pdf]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Mon, 13 Jun 2022 12:45:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1655114214429/J0sz6GIwe.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At Shipsy, we track, deliver, and route millions of consignments across the globe, daily.</p>
<p>Hence, redundancies, in any form and in any business vertical, can come with huge repercussions. </p>
<p>However, when we are operating at scale, redundancies have the tendency to not only creep in but also evolve gradually over time.</p>
<p>One such redundancy that we encountered was consignment label generation. Every customer had different label design specifications, which is why every label required individual creation and ultimately individual coding.</p>
<p>For example, every field and information present on this consignment label is positioned and rendered via code:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1655117298074/pOs3lJt8m.png" alt="Screenshot 2022-06-13 at 2.14.16 PM.png" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<p>This meant that a lot of code was redundant and repetitive. The process lacked resource efficiency and consumed a lot of time and effort as well.</p>
<p>Here is how we overcame this redundancy by making the entire process code-free.</p>
<h2 id="heading-problem-statement-redundant-and-inefficient-label-generation-process">Problem Statement - Redundant and Inefficient Label Generation Process</h2>
<p>Previously, the label generation process included the following 5 steps:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1655118787457/CIzpIsV9z.png" alt="image.png" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<p>Previously, every label generation began with a ticket, followed by coding as per the client-provided specifications. Next, the label was tested by actually printing it on paper to ensure that all the content appeared correctly. </p>
<p>Finally, the label entered the UAT (User Acceptance Testing).</p>
<p>Hence, the process was time-consuming, tedious, and involved a lot of repetitive coding.</p>
<h2 id="heading-our-objective">Our Objective</h2>
<p>We wanted to make our label generation entirely code-free, and create a dashboard for label generation so that:</p>
<ul>
<li>Anyone, including our clients, can generate labels directly from the dashboard in an entirely code-free manner</li>
<li>The previously generated labels can be stored for future use with minimum changes</li>
<li>The label generation process becomes more time and resource-efficient</li>
<li>There is no need for QA, UAT, and actual print-testing of the labels</li>
</ul>
<p>Here is the roadmap for our endeavor:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1655120047318/YGZkNGw_Q.png" alt="Screenshot 2022-06-13 at 4.53.02 PM.png" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<p>Next, we share a step-wise walkthrough of the process that helped us make our label generation code-free, resource-efficient, and redundancy-free.</p>
<h2 id="heading-solution-code-free-label-generation">Solution: Code-Free Label Generation</h2>
<h3 id="heading-step-1-creating-the-metadata-to-generate-label">Step 1 - Creating the Metadata to Generate Label</h3>
<p>As a label is a grid with an array of rows, the metadata to create a label must have the following:</p>
<ul>
<li>Page properties - page size and page arrangement</li>
<li>Components - rows, cells, text, image, barcode</li>
</ul>
<p>A snapshot of basic metadata is shared below:</p>
<pre><code>{
 <span class="hljs-attr">"padding"</span>: <span class="hljs-string">"0px 0px 0px 0px"</span>,
 <span class="hljs-attr">"orientation"</span>: <span class="hljs-string">"portrait"</span>,
 <span class="hljs-attr">"dimension_code"</span>: <span class="hljs-string">"A6"</span>,
 <span class="hljs-attr">"label_arrangement_code"</span>: <span class="hljs-string">"1x1"</span>,
 <span class="hljs-attr">"label_vertical_spacing"</span>: <span class="hljs-number">0</span>,
 <span class="hljs-attr">"label_horizontal_spacing"</span>: <span class="hljs-number">0</span>,
 <span class="hljs-attr">"rows_metadata"</span>: [
   {
     <span class="hljs-attr">"cells"</span>: [
       {
         <span class="hljs-attr">"type"</span>: <span class="hljs-string">"content"</span>,
         <span class="hljs-attr">"style"</span>: {
           <span class="hljs-attr">"width"</span>: <span class="hljs-string">"100%"</span>,
           <span class="hljs-attr">"padding"</span>: <span class="hljs-string">"0px 0px 0px 0px"</span>
         },
         <span class="hljs-attr">"content"</span>: [
           {
             <span class="hljs-attr">"type"</span>: <span class="hljs-string">"text"</span>,
             <span class="hljs-attr">"field"</span>: <span class="hljs-string">"service_type_id"</span>,
             <span class="hljs-attr">"style"</span>: {
               <span class="hljs-attr">"fontSize"</span>: <span class="hljs-string">"10"</span>
             },
             <span class="hljs-attr">"contentType"</span>: <span class="hljs-string">"dynamic_field_code"</span>,
             <span class="hljs-attr">"dynamic_field_code"</span>: <span class="hljs-string">"service_type_id"</span>
           }
         ],
         <span class="hljs-attr">"widthType"</span>: <span class="hljs-string">"%"</span>,
         <span class="hljs-attr">"contentStyle"</span>: {
           <span class="hljs-attr">"justifyContent"</span>: <span class="hljs-string">"center"</span>
         }
       }
     ],
     <span class="hljs-attr">"style"</span>: {
       <span class="hljs-attr">"height"</span>: <span class="hljs-number">12</span>
     }
   },
   {
     <span class="hljs-attr">"cells"</span>: [
       {
         <span class="hljs-attr">"type"</span>: <span class="hljs-string">"content"</span>,
         <span class="hljs-attr">"style"</span>: {
           <span class="hljs-attr">"width"</span>: <span class="hljs-string">"100%"</span>,
           <span class="hljs-attr">"padding"</span>: <span class="hljs-string">"0px 10px 0px 10px"</span>
         },
         <span class="hljs-attr">"content"</span>: [
           {
             <span class="hljs-attr">"type"</span>: <span class="hljs-string">"barcode"</span>,
             <span class="hljs-attr">"field"</span>: <span class="hljs-string">"reference_number"</span>,
             <span class="hljs-attr">"contentType"</span>: <span class="hljs-string">"static_value"</span>,
             <span class="hljs-attr">"static_value"</span>: <span class="hljs-string">"reference_number"</span>
           }
         ],
         <span class="hljs-attr">"widthType"</span>: <span class="hljs-string">"%"</span>,
         <span class="hljs-attr">"contentStyle"</span>: {
           <span class="hljs-attr">"justifyContent"</span>: <span class="hljs-string">"center"</span>
         }
       }
     ],
     <span class="hljs-attr">"style"</span>: {
       <span class="hljs-attr">"height"</span>: <span class="hljs-number">65</span>,
       <span class="hljs-attr">"borderTop"</span>: <span class="hljs-literal">false</span>
     }
   }
 ]
}
</code></pre><p>Some content, such as the company image, logo, barcode, etc., is fetched dynamically from the internal server, while static content, such as shipment details and addresses, can also be filled in manually, as shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1655120340333/1PUo7r-LZ.png" alt="label1.png" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<h3 id="heading-step-2-making-ui-from-metadata">Step 2: Making UI From Metadata</h3>
<p>We used the React-PDF library to render the primitive React components in a PDF.</p>
<p>Hence, the components can be returned as a Doc file, a blob, a series of PDFs, etc.</p>
<p>We wrote our own components on top of the React-PDF library; these iterate over our metadata to generate labels in a code-free manner.</p>
<p>They look like regular React components and take props that contain the consignment data and metadata.</p>
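<p>A minimal sketch of the idea, with assumed shapes and hypothetical <code>renderCell</code>/<code>renderLabel</code> helpers (the real wrapper emits React-PDF primitives such as <code>View</code> and <code>Text</code> rather than plain objects):</p>

```javascript
// Walk the rows_metadata array from the label metadata and build a render
// tree. Dynamic fields are resolved from the consignment record; static
// values are taken verbatim from the metadata.
function renderCell(cell, consignment) {
  const node = { type: 'cell', style: cell.style, children: [] };
  for (const item of cell.content) {
    const value = item.contentType === 'dynamic_field_code'
      ? consignment[item.dynamic_field_code] // e.g. service_type_id
      : item.static_value;
    node.children.push({ type: item.type, value, style: item.style });
  }
  return node;
}

function renderLabel(metadata, consignment) {
  return metadata.rows_metadata.map((row) => ({
    type: 'row',
    style: row.style,
    children: row.cells.map((cell) => renderCell(cell, consignment)),
  }));
}
```

<p>Because the components only interpret metadata, adding a new label design needs new metadata, not new code.</p>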
<h3 id="heading-step-3-handling-barcodes">Step 3: Handling Barcodes</h3>
<p>We were already using a Lambda function for barcodes that we have used here as well. </p>
<p>As the barcode is a dynamic consignment property, its generation process cannot be hard-coded. Hence, we have used a Unique Identifier (UUID) map that combines consignment data and metadata.</p>
<p>This way, the barcodes that are rendered from the consignment data are attached to the label at the time of final label rendering.</p>
<p>As the PDF rendering is a synchronous process and barcode generation from the Lambda function is asynchronous, it is important to get the barcode before rendering the PDF via React-PDF.</p>
<h3 id="heading-step-4-bringing-metadata-to-ui">Step 4: Bringing Metadata to UI</h3>
<p>We use form functionality to create a label where everything ranging from page size to padding, and static to dynamic label contents are generated without coding. </p>
<p>A dynamic preview of the label being built is also shown, giving a live rendering of the most recent changes or actions, as shown in the following image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1655120428225/MAJQa95T2.png" alt="label 2.png" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<p>Further, the default consignment metadata is available on the sidebar to allow the developers easy access to the code as and when required. </p>
<p>This way, the dynamic content can be easily copy-pasted from the metadata to the form elements, and the label generation becomes easier.</p>
<p>While this process solved the code-free nature of the label generation process, two challenges were still there:</p>
<ul>
<li>The form could grow indefinitely</li>
<li>As form rendering is a resource-intensive process, re-rendering a long-form could easily render the page unresponsive</li>
</ul>
<p>These challenges were resolved in the next step.</p>
<h3 id="heading-step-5-optimizing-the-form-performance">Step 5: Optimizing the Form Performance</h3>
<p>We optimized form performance by breaking the form down into separate components and making them pure functions.</p>
<p>A component's output now stays the same unless its properties change. This means that unless the props of a form component change, React.memo returns the cached value and the component is not re-rendered.</p>
<p>Hence, there are no performance issues either.</p>
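<p>The caching behavior React.memo gives us can be illustrated with a plain-JavaScript analogue (a simplified sketch, not React's actual implementation):</p>

```javascript
// Reuse a component's output when its props are shallow-equal to the
// previous props, skipping the expensive render work.
function memoizeComponent(renderFn) {
  let lastProps = null;
  let lastResult = null;
  return function (props) {
    const sameProps = lastProps !== null &&
      Object.keys(props).length === Object.keys(lastProps).length &&
      Object.keys(props).every((k) => props[k] === lastProps[k]);
    if (sameProps) return lastResult; // cached value, no re-render
    lastProps = props;
    lastResult = renderFn(props);
    return lastResult;
  };
}
```

<p>A second call with identical props returns the cached result without invoking the render function again, which is why the long form stays responsive.</p>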
<h3 id="heading-step-6-printing-the-label-from-dashboard">Step 6: Printing the Label From Dashboard</h3>
<p>Finally, once the label is ready, it can be saved and published in the dashboard. The print command can be given from the dashboard and any user can easily generate the labels without having to code.</p>
<p>The PDF of the label is generated on the server side and sent to the client as a blob. The client can view and print the label as and when required.</p>
<p>To prevent different people from editing a label concurrently, we have a uniqueness check for versioning.</p>
<p>The versions are stored in the history and can be used to override the changes.</p>
<p>An overall walkthrough of the entire process is shown in the following image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1655120483522/Ag3721rh0.jpg" alt="Labels-100 (1).jpg" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<h2 id="heading-results-benefits-we-gained-via-code-free-label-generation">Results: Benefits We Gained via Code-Free Label Generation</h2>
<p>Arresting such redundancies can optimize the processes and spur a number of benefits. Some of the benefits we unlocked via this endeavor include:</p>
<h3 id="heading-1-speedy-label-generation">1. Speedy Label Generation</h3>
<p>As the process no longer involves redundant and repetitive coding, the label generation has become faster, easier, and hassle-free.</p>
<p>Previously, it took 2 days to generate a single label. Using code-free label generation has reduced the label generation time to 2 hours!</p>
<h3 id="heading-2-faster-onboarding-of-customers">2. Faster Onboarding of Customers</h3>
<p>Fast and efficient label generation has reduced the operations kick-start time by 1.5 days. </p>
<p>Customers can get started with the label generation process without any specific training. The dashboard is extremely intuitive, and the UI components are self-explanatory, with captions shown on hover.</p>
<h3 id="heading-3-save-drafts-for-quicker-updates">3. Save Drafts for Quicker Updates</h3>
<p>The multiple drafts generated by a user can be saved for future use and updates. This reduces the overall time and effort spent in generating labels.</p>
<p>Shipsy is a community of agile developers consistently working towards the improvement of our products, and underlying tech to ensure performance-intensive business deliverables. </p>
<p>We hope this write-up energizes similar efforts across the developer community.</p>
<p>To become a part of our developer community, please visit the <a target="_blank" href="https://shipsy.io/careers/">Careers Page</a>.</p>
<h2 id="heading-acknowledgments-and-contributions">Acknowledgments and Contributions</h2>
<p>As an effort towards consistent learning and skill development, we have regular “Tech-A-Break” sessions at Shipsy where team members exchange notes on specific ideas and topics. This write-up stems from a recent Tech-A-Break session on code-free label generation, helmed by Shivangi Singla and Sumit Gupta.</p>
]]></content:encoded></item><item><title><![CDATA[Monitoring on the Move: A Developers' Guide to Mobile App Stability and Scalability]]></title><description><![CDATA[App stability and scalability are two crucial app performance metrics that determine the overall quality and usability of the app.
While an app might be fairly stable initially, constant updates, data overload, and sudden scaling bring a slew of chan...]]></description><link>https://engineering.shipsy.io/monitoring-on-the-move-a-developers-guide-to-mobile-app-stability-and-scalability</link><guid isPermaLink="true">https://engineering.shipsy.io/monitoring-on-the-move-a-developers-guide-to-mobile-app-stability-and-scalability</guid><category><![CDATA[Mobile apps]]></category><category><![CDATA[mobile app development]]></category><category><![CDATA[app development]]></category><category><![CDATA[Mobile Development]]></category><category><![CDATA[mobile application design]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Mon, 30 May 2022 05:46:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1653643575903/-TwgoVbJa.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>App stability and scalability are two crucial app performance metrics that determine the overall quality and usability of the app.</p>
<p>While an app might be fairly stable initially, constant updates, data overload, and sudden scaling bring a slew of changes that derail its stability.</p>
<p>Here, we discuss various tips for mobile app developers to create scalable and stable apps. We also share the efforts we invested in one of our apps to make it 99% stable and highly scalable. </p>
<h3 id="heading-developer-tip-1-identify-and-define-key-events-to-track-and-record-them-separately">Developer Tip #1 - Identify and Define Key Events to Track and Record Them Separately</h3>
<p>Diversify the analytics by tracking and recording all the key events, such as all types of conversion events, like:</p>
<ul>
<li>App browsing</li>
<li>Wishlist tracking</li>
<li>Check out</li>
<li>Cart abandonment</li>
<li>Visitor activity, etc</li>
</ul>
<p>Create different events for different steps in the customer journey, such as conversion, and track them. </p>
<p>This way the upper management can get highly granular business intelligence in the form of actionable insights, such as:</p>
<ul>
<li>Increase or decrease in conversions after a new feature release</li>
<li>Conversion trends for different campaigns</li>
<li>Sales and impressions generated for a similar product with different imagery or promotional campaigns</li>
</ul>
<p>Takeaway:</p>
<p>Creating different events for all the crucial steps in the customer journey facilitates event tracking and app monitoring at a granular level. It makes issue tagging easier and faster.</p>
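<p>For illustration, per-step event tracking can be as simple as the sketch below. The <code>logEvent</code> helper and event names are hypothetical; in practice the call would go to an analytics SDK rather than an in-memory array:</p>

```javascript
// One named event per customer-journey step, recorded with a timestamp
// and a free-form payload so dashboards can slice data by step.
const analyticsStore = [];

function logEvent(name, payload = {}) {
  analyticsStore.push({ name, payload, at: new Date().toISOString() });
}

// Each step in the journey gets its own event name.
logEvent('checkout_started', { cartValue: 1200 });
logEvent('cart_abandoned', { step: 'payment' });
```
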
<h3 id="heading-developer-tip-2-proactive-root-cause-analysis-rca">Developer Tip #2 - Proactive Root Cause Analysis (RCA)</h3>
<p>We started working on the stability and scalability of one of our mobile apps a few years back, and a few of our priorities were:</p>
<ul>
<li>Proactive event tracking and issue resolution </li>
<li>Constant on-the-move monitoring</li>
<li>Repeating the cycle for a glitch-free UI/UX</li>
</ul>
<p>And, our list of challenges included the following:</p>
<ul>
<li>Gathering information about events that led to issues, such as crashes, missed orders, or alerts, from the end-users</li>
<li>Making our mobile app more stable and learning about runtime crashes in production</li>
<li>Hitting a large number of APIs via a Pull-based mechanism</li>
</ul>
<p>Now, these three challenges spawned the following three issues:</p>
<ul>
<li>Inability to locate and identify the root cause of the issue - a wrong entry, any existing bug, process fault, etc.</li>
<li>Inability to track and monitor the app stability in runtime from a developer’s perspective </li>
<li>A massive number of API calls persisted irrespective of the efficiency of the query</li>
</ul>
<p>Now, take a look at the following image for a basic overview of the implications these issues caused:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653888890587/AEl8pZ8fz.png" alt="Screenshot 2022-05-30 at 11.01.18 AM.png" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<p>So, what did we do?</p>
<p>We had a two-pronged approach with Firebase here - Analytics and Crashlytics.</p>
<p>Let us explore them one by one.</p>
<h3 id="heading-analytics-what-how-and-why">Analytics - What, How, and Why?</h3>
<p>We created an event for every user action and recorded every action, such as:</p>
<ul>
<li>Button clicks</li>
<li>Screen navigation</li>
<li>API calls and status</li>
<li>Functional events</li>
</ul>
<p>The entire process is shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653638067458/3MLyUAKeQ.png" alt="image (13).png" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<p>Next, we gathered the event data (analytics data) for an in-depth analysis. This data store recorded events (past events and intra-day events) in a date-wise manner.</p>
<p>Finally, we did thorough analysis and reporting on these data sets to generate actionable insights into overall user activity and app performance. We also created multiple dashboards to track the status of core functionalities.</p>
<p>Gathering such analytics allowed us to tap into the granular user activities and experiences, such as:</p>
<ul>
<li>Analyze user behavior, such as average time spent on a specific tab or page</li>
<li>Identify the features or updates that increased crashes or app issues</li>
<li>Track and monitor the app user journey to find the most-used app features</li>
<li>Optimize and improve the app UI by reprioritizing the app screen order</li>
</ul>
<p>So, we leveraged the following 4-step process to improve the UI via analytics:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1653638132675/zCUAHpdTR.jpg" alt="Infographic chart-100.jpg" /></p>
<p>Source: <a target="_blank" href="https://shipsy.io/">Shipsy</a></p>
<h2 id="heading-crashlytics-proactive-tracking-monitoring-and-resolution-of-issues">Crashlytics: Proactive tracking, monitoring, and resolution of issues</h2>
<p>A few years back one of our worst-case scenarios looked like this:</p>
<ul>
<li>5% of 6800 order numbers were lost</li>
<li>We were getting 35000 API calls per minute!</li>
<li>Every end-user was looking for updates and new orders via a “PULL” mechanism, causing more API hits</li>
</ul>
<p>To overcome these challenges, we ensured that every time our mobile app crashed or threw an exception, the stack trace was thrown with the exception.</p>
<p>So, our crash reports now included:</p>
<ul>
<li>Crash versions</li>
<li>Names</li>
<li>Keys </li>
<li>Logs</li>
<li>Event data</li>
</ul>
<p>This way, we no longer required the app users to report the exact sequence of events that led to the crash or any exception.</p>
<p>Also, every crash monitoring was followed by a crash report and a crash fix, and the process was repeated proactively.</p>
<p>We started “pushing down” the latest information about events as alerts and notifications.</p>
<p>This helped us significantly reduce the number of API calls and helped us scale in an efficient and sustainable way.</p>
<p>This proactive RCA and in-depth event analytics increased our app stability from 68% to 99%. It also empowered us with insights that helped us resolve issues in our mobile app even before the client could notice them.</p>
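<p>Conceptually, the crash reports above amount to capturing the stack trace together with app context at the moment of failure. A hedged sketch follows; the <code>reportCrash</code> and <code>withCrashReporting</code> helpers are illustrative, not the actual Crashlytics API:</p>

```javascript
// Every failure is recorded with its stack trace plus app context -
// version, custom keys, and breadcrumb logs - so users never need to
// reconstruct the sequence of events themselves.
const crashReports = [];

function reportCrash(error, context) {
  crashReports.push({
    message: error.message,
    stack: error.stack,         // stack trace thrown with the exception
    appVersion: context.appVersion,
    keys: context.keys,         // custom key-value pairs
    logs: context.logs,         // breadcrumb events preceding the crash
  });
}

// Wrap a risky handler so any exception is reported before re-throwing.
function withCrashReporting(fn, context) {
  return (...args) => {
    try {
      return fn(...args);
    } catch (err) {
      reportCrash(err, context);
      throw err;
    }
  };
}
```
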
<p>Takeaway:</p>
<p>Proactive Root Cause Analysis can be of real help when it comes to building highly scalable and stable mobile apps. Using actionable insights from the event records can improve overall app usability and performance.</p>
<h2 id="heading-let-your-app-tell-you-what-is-wrong">Let Your App Tell You What Is Wrong</h2>
<p>While the technical aptitude of every app user differs, their perspective is generally very different from a developer's. </p>
<p>For example, the app user would say - “I pressed the New Order button and the app didn’t work properly. It did nothing.”</p>
<p>On the other hand, a developer is looking for something like - “I was in the middle of updating a delivery record when I got the New Order message. I tapped on it and the app didn't do anything. I was not able to record the delivery and had to relaunch the app.”</p>
<p>Situations like these can be endless. </p>
<p>Therefore, it is important that every crash event detail is fetched from the most trustworthy source - your app.</p>
<h3 id="heading-developer-tip-3-track-monitor-record-and-analyze-data">Developer Tip #3 - Track, Monitor, Record, and Analyze Data</h3>
<p>Following a consistent and robust app data recording practice pays off in various ways. You always, always have the right event data for debugging, app improvements, and issue resolution.</p>
<p>This makes it easier for the developers to locate and identify the exact cause of the crash event and resolve it properly.</p>
<p>Takeaway: </p>
<p>Making your mobile apps “tell you” what went wrong allows you to make them more stable and resilient.</p>
<h2 id="heading-bridging-the-gap-between-support-and-production">Bridging the Gap Between Support and Production</h2>
<p>Earlier our dashboard system for our Support Team and Production Team was disparate. </p>
<p>However, this led to disjointed and redundant event data collection. </p>
<p>This was because every support team member would use a different phrase to record crash information, which forced the production team to make individual calls to end-users to gather the details.</p>
<p>We overcame this challenge by bridging the gap between our Support and Production teams. </p>
<p>Now, every event has an event ID that can be tracked, monitored, and referenced in the future as well.</p>
<h3 id="heading-developer-tip-4-reduce-the-time-spent-in-gathering-app-event-data">Developer Tip #4 - Reduce the Time Spent in Gathering App Event Data</h3>
<p>By doing so you can:</p>
<ul>
<li>Reduce the overall time spent in gathering the event data</li>
<li>Skip the agony of working from second-hand analysis done by support staff</li>
<li>Track the event </li>
<li>Record and monitor it for future references</li>
</ul>
<p>Takeaway:</p>
<p>Reduce the number of steps for event data gathering and standardize the event reporting, monitoring, and tracking process. This makes the debugging process more efficient and less redundant.</p>
<h2 id="heading-app-performance-monitoring-an-ongoing-process">App Performance Monitoring: An Ongoing Process</h2>
<p>Creating a stable, robust, and scalable mobile app is a daunting task that requires consistent effort. The scope, functionalities, and utility of an enterprise mobile app change over time as the user base grows and new user categories emerge. </p>
<p>We, at Shipsy, believe in the consistent improvement of our products, codebase, and underlying tech to ensure that our products stay relevant, high-performing, and razor-sharp.</p>
<p>To become a part of our developer community, please visit the <a target="_blank" href="https://shipsy.io/careers/">Careers Page</a>.</p>
<h2 id="heading-acknowledgments-and-contributions">Acknowledgments and Contributions</h2>
<p>As an effort towards consistent learning and skill development, we have regular “Tech-A-Break” sessions at Shipsy where team members exchange notes on specific ideas and topics. This write-up stems from a recent Tech-A-Break session on Mobile App stability and scalability, helmed by Pankaj Yadav.</p>
<p>Technical Contributions: Sahil Arora and Kalpesh Kundanani.</p>
]]></content:encoded></item><item><title><![CDATA[DIY Optimization: Exploring Shipsy's Smart Route Optimizer]]></title><description><![CDATA[Let’s talk about Sam, a warehouse manager who is losing his sleep these days, literally and figuratively.
He wakes up sharp at 3 am daily, to plan delivery routes for 50000 consignments from his warehouse. With a team of 60 riders and simple geo-codi...]]></description><link>https://engineering.shipsy.io/diy-optimization-exploring-shipsys-smart-route-optimizer</link><guid isPermaLink="true">https://engineering.shipsy.io/diy-optimization-exploring-shipsys-smart-route-optimizer</guid><category><![CDATA[THW Web Apps]]></category><category><![CDATA[routing]]></category><category><![CDATA[optimization]]></category><category><![CDATA[algorithms]]></category><category><![CDATA[webapps]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Fri, 13 May 2022 06:46:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1652424488346/rLI4G_7dD.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let’s talk about Sam, a warehouse manager who is losing his sleep these days, literally and figuratively.</p>
<p>He wakes up sharp at 3 am daily, to plan delivery routes for 50000 consignments from his warehouse. With a team of 60 riders and simple geo-coding software, he is able to complete the job by 6:30 am. </p>
<p>Even after diligent planning, he is unable to plan efficient trips that save money, time, and fuel. He is unable to find the most cost and time-efficient routes or vehicles for the largest savings. </p>
<p>So, he is planning to wake up at 2 am tomorrow morning. But, he is still unsure whether he will be able to save more money than yesterday, or not.</p>
<p>Route planning in logistics can be challenging for large enterprises. This is because constraints make it hard to optimize the transit of millions of orders per day.</p>
<p>Now, these constraints can be anything:</p>
<ul>
<li>Number of deliveries every vehicle is allowed to make</li>
<li>Number of vehicles serving a specific area</li>
<li>Cost of vehicles, fuel, and fulfillment</li>
<li>Time or any other SLA</li>
</ul>
<p>Optimizing routes is a challenging constraint programming problem. It involves analyzing and optimizing specific business use cases at multiple levels. </p>
<p>Some businesses might focus on quick deliveries, while others might prefer vehicle optimization. </p>
<p>This means unlocking economies at scale requires much more than a simple geo-coding engine or a TSP solver.</p>
<p>Scaling your route planning software involves two things:</p>
<ul>
<li>Solving business-specific constraints at scale</li>
<li>Generating intuitive and optimized trips even when the constraints increase</li>
</ul>
<p>So, as we expanded, we were routing millions of consignments per day, and the standard “one-size-fits-all” approach crumbled.</p>
<p>We required a DIY Smart Route Planning Optimizer, and here is a walkthrough of how we built one. </p>
<p>What makes this entire endeavor more rewarding is that this DIY works for both stakeholders - developers and clients - who can each customize and configure Shipsy's routing software. </p>
<p>They can generate optimized routes by choosing the set of trip constraints from a given list.</p>
<p>So, the routing software allows Sam, the warehouse manager, and all his friends in the industry to generate trips with the greatest profits.</p>
<p>Ultimately, this means that he and his friends can now enjoy a peaceful sleep.</p>
<h2 id="heading-building-a-smart-route-optimizer-for-multiple-constraints">Building a smart route optimizer for multiple constraints</h2>
<p>We aimed at building a smart, efficient, and scalable route optimizer that could:</p>
<ul>
<li>Distribute consignment (orders) among riders and take care of all the constraints</li>
<li>Optimize the route (Cost)</li>
<li>Suggest the best non-overlapping routes (quality and efficiency by plying only one vehicle to a specific area)</li>
<li>Doesn’t require developer involvement for configuration every time (We have to make it easier for Sam, remember?)</li>
</ul>
<p>This optimizer would help us in:</p>
<ul>
<li>Asset allocation</li>
<li>Creating route plans, loading sequences, delivery ETAs</li>
<li>Market vehicle requirements</li>
</ul>
<p>It would consider the existing resources to suggest a relevant optimization strategy. It would consider the asset configuration, cost function, and consignment constraints. </p>
<p>This would help us deliver or transport consignments from a hub to desired locations (delivery location or another hub) while:</p>
<ul>
<li>Optimizing the use of existing resources </li>
<li>Avoiding breaching any vehicle or consignment constraints</li>
</ul>
<p>Add operational costs and distance traveled and we had a Pandora’s Box gawking at us!</p>
<p>At first glance, this might seem like a simple K-means or Travelling Salesman problem.</p>
<p>But, here is the catch!</p>
<p>To extract the global optimum, the only guaranteed approach is to generate every possible arrangement of trips and filter in only those arrangements that satisfy all the constraints. </p>
<p>Finally, we pick the most optimal among the filtered arrangements. </p>
<p>Since we are trying every possible permutation of the trip, the time complexity is O(n!). </p>
<p>To put things into perspective, consider the following scenario:</p>
<p>Consider we have three letters {A, B, and C}. </p>
<p>How many permutations of it are possible? </p>
<p>We can start by fixing A, then choosing between {B, C}. This can be done in two ways: {A, B, C} or {A, C, B}. The same can be extended to B and C. </p>
<p>Wherein the results would be: {A, B, C}, {A, C, B}, {B, A, C}, {B, C, A}, {C, A, B}, {C, B, A}.</p>
<p>Take a look at the following image for a visual idea:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652421295658/4PtTcgLWk.png" alt="permutation.png" /></p>
<p><a target="_blank" href="https://www.askmattrab.com/notes/689-permutations-and-combinations">Source</a></p>
<p>Thus, the number of permutations is 6 = 3!. We have established that the time complexity of finding the global optimum is O(n!). </p>
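<p>The enumeration for {A, B, C} can be sketched in a few lines. This is a minimal illustration of the brute-force idea, not production code; <code>permutations</code> is a hypothetical helper:</p>

```typescript
// Brute-force enumeration of every ordering of the given stops.
// The result count grows as n!, which is exactly the blow-up discussed above.
function permutations<T>(items: T[]): T[][] {
  if (items.length <= 1) return [items.slice()];
  const result: T[][] = [];
  items.forEach((item, i) => {
    // All orderings that start with `item`, followed by permutations of the rest.
    const rest = items.slice(0, i).concat(items.slice(i + 1));
    for (const perm of permutations(rest)) {
      result.push([item].concat(perm));
    }
  });
  return result;
}

// Three stops {A, B, C} yield 3! = 6 candidate trip orderings.
console.log(permutations(["A", "B", "C"]).length); // 6
```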
<p>The growth rate of n! is worse than exponential:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652421365827/g9K5lG_wi.png" alt="image.png" /></p>
<p><a target="_blank" href="https://miro.medium.com/max/1200/1*5ZLci3SuR0zM_QlZOADv8Q.jpeg">Source</a></p>
<p>We can see n! is a “terrible” time complexity. Thus we needed ways to optimize this process.</p>
<p>So, we started with the strategies we had and began a smart DIY for Shipsy’s Smart Route Optimizer.</p>
<h3 id="heading-what-did-we-do">What did we do</h3>
<p>We had different routing algorithms that fit different use cases:</p>
<ul>
<li>Balanced K-Means and Concorde: the K-means algorithm distributes the consignments among the workers, while Concorde provides the optimal path within each cluster.</li>
<li>Constraint programming solver</li>
<li>Shipsy's Routing Algorithm</li>
</ul>
<p>As we onboarded more clients, the nature of consignments changed, adding more constraints:</p>
<ul>
<li>Consignments’ weight, volume, height, etc., needed to be considered in the planning</li>
<li>Fuel Type - Certain areas in India especially in the NCR region have constraints on the type of fuel you can use. For example, in the national capital, only CNG vehicles can serve particular pin codes.</li>
<li>Delivery, Pickup, and Pickup Delivery in different time windows</li>
<li>Area Partitioning</li>
<li>Vehicle Priority (Self-owned vehicles should get priority)</li>
<li>Order Priority (Certain high-value orders need to be prioritized)</li>
<li>Soft constraints, such as overlapping of delivery routes, clustering, and vehicle use</li>
</ul>
<p>Hence, we needed something better than K-means.</p>
<p>After all, Sam’s friend Ray might be looking for a way to create profitable and intuitive trips with a specific set of vehicles or riders. This is the whole idea behind DIY software.</p>
<p>It adopts whatever configuration a specific user considers “the best or most relevant”.</p>
<p>So, we opted for solving constraint programming using Google OR-tools.</p>
<h2 id="heading-using-google-or-tools-for-linear-programming">Using Google OR-Tools for Linear Programming</h2>
<p>The basic idea was to define "what a particular solution should look like", instead of defining the exact solution. </p>
<p>We converted constraints into equations of the Linear Optimization problem. Then we solved those equations keeping the constraints in mind.</p>
<p>An example of a Linear Optimization problem is shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652431627586/cCjgTmVzZ.png" alt="Screenshot 2022-05-13 at 2.15.04 PM.png" /></p>
<p><a target="_blank" href="https://developers.google.com/optimization/lp/lp_example#:~:text=The%20following%20sections%20present%20an%20example%20of%20an%20LP%20problem%20and%20show%20how%20to%20solve%20it.%20Here%27s%20the%20problem%3A">Source</a></p>
<p>In our case, the constraints were:</p>
<ul>
<li>Consignment constraints: Weight, volume, time, nature, etc.</li>
<li>Resource: Fixed cost, vehicle shift, vehicle location, weight capacity, volume capacity, etc.</li>
<li>Cost Function, Speed factor, etc.</li>
</ul>
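<p>Before such constraints reach a solver, they have to be expressed as data. The sketch below models two of the categories above along with one hard-constraint check; the field names are illustrative assumptions, not Shipsy’s actual schema:</p>

```typescript
// Illustrative consignment and vehicle models (assumed field names).
interface Consignment {
  weightKg: number;
  volumeM3: number;
}

interface Vehicle {
  fixedCost: number;
  weightCapacityKg: number;
  volumeCapacityM3: number;
}

// Hard-constraint check: a load is feasible only if it breaches neither
// the weight capacity nor the volume capacity of the vehicle.
function fitsVehicle(vehicle: Vehicle, load: Consignment[]): boolean {
  const totalWeight = load.reduce((sum, c) => sum + c.weightKg, 0);
  const totalVolume = load.reduce((sum, c) => sum + c.volumeM3, 0);
  return (
    totalWeight <= vehicle.weightCapacityKg &&
    totalVolume <= vehicle.volumeCapacityM3
  );
}
```

<p>A solver then searches only among assignments for which every such check holds - which is what defining “what a solution should look like”, rather than the steps to reach it, means in practice.</p>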
<p>(Before we proceed further, it is important to mention that this list of constraints is ever-expanding. As we scaled and onboarded more clients, the problems and the criteria for determining the “profitability and optimization” of trips changed vastly.</p>
<p>Currently, we offer three different routing algorithm options, each with 30 to 40 constraints.)</p>
<p>Now, to suggest a solution, we were:</p>
<ul>
<li>Defining the properties of the solution (optimized routes)</li>
<li>Not defining the steps (which vehicles for which route, etc) to come to a solution</li>
</ul>
<p>For constraint programming, we used Google OR-Tools Routing Solver. It is a fast, memory-efficient, and numerically stable solution.</p>
<p>For example, its solution for the problem shown above looks like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652421880480/MnWd9Zrz3.png" alt="image.png" /></p>
<p><a target="_blank" href="https://developers.google.com/optimization/lp/lp_example#:~:text=The%20constraints%20define%20the%20feasible%20region%2C%20which%20is%20the%20triangle%20shown%20below%2C%20including%20its%20interior.">Source</a></p>
<p>So, using the Routing Solver by Google, we solved the vehicle routing optimization. Now, we could find the best routes for a fleet of vehicles visiting a set of locations.</p>
<p>Usually, "best" means routes with the least total distance or cost. </p>
<p>Look at the following example where “0” denotes hub and other numbers as nodes for vehicles to visit:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652421938577/YdjXLIr2P.png" alt="image.png" /></p>
<p><a target="_blank" href="https://developers.google.com/optimization/routing/vrp#:~:text=Imagine%20a%20company%20that%20needs%20to%20visit%20its%20customers%20in%20a%20city%20made%20up%20of%20identical%20rectangular%20blocks.%20A%20diagram%20of%20the%20city%20is%20shown%20below%2C%20with%20the%20company%20location%20marked%20in%20black%20and%20the%20locations%20to%20visit%20in%20blue.">Source</a></p>
<p>When we used Google OR-tools without constraints, the trip looked something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652421987444/zS-3zXUb7.png" alt="image.png" /></p>
<p><a target="_blank" href="https://developers.google.com/optimization/routing/vrp#:~:text=The%20diagram%20below%20shows%20the%20assigned%20routes%2C%20in%20which%20the%20location%20indices%20have%20been%20converted%20to%20the%20corresponding%20x%2Dy%20coordinates.">Source</a></p>
<p>Okay, so we had amazingly intuitive trips with a single constraint. </p>
<p>But this is not the case in the real world, right?</p>
<p>Because, remember Sam’s friend Ray, his manager has been looking for ways to cut down the operational costs.</p>
<p>So, here comes another set of constraints, such as vehicle capacity.</p>
<p>Now, when we added even one more constraint (resources, consignments, and cost functions), the trip became:</p>
<ul>
<li>Non-intuitive</li>
<li>Complex</li>
</ul>
<p>So, with constraints, the Routing Solver presented the following trip:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652422056243/8j9jQCJSC.png" alt="image.png" /></p>
<p><a target="_blank" href="https://developers.google.com/optimization/routing/cvrp">Source</a></p>
<p>The trip shown above became even worse as we kept on adding constraints. The routes were overlapping, and the trips were no longer intuitive. </p>
<p>So, even though the trips were optimal, they were extremely complex and non-intuitive. </p>
<p>They looked something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652422167143/SRqBOMtlz.png" alt="image.png" /></p>
<p><a target="_blank" href="https://shipsy.io/">Source</a></p>
<p>Now, not every customer applies all the constraints on every delivery or transit. So, we needed our trips to be intuitive as well as optimal.</p>
<p>Thus, we created Shipsy’s Clustering Algorithm from scratch.</p>
<h3 id="heading-shipsys-clustering-algorithm">Shipsy’s Clustering Algorithm</h3>
<p>Shipsy’s Clustering Algorithm is easy to use and fast. It helps us create intuitive trips with simple constraints. </p>
<p>It first segregates customers into clusters based on the partition they belong to. </p>
<p>Usually, partition_id is set by users to restrict vehicles from serving multiple locations and to minimize overlaps.</p>
<p>This is followed by collecting all the feasible nodes greedily. The greedy aspect denotes choosing the closest feasible consignment.</p>
<p>Finally, a TSP algorithm is applied to each cluster to get an optimal route per cluster.</p>
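<p>These steps can be sketched as follows. This is a simplified illustration with Euclidean distance and hypothetical types - the production algorithm uses real road distances and many more feasibility checks:</p>

```typescript
interface RouteNode {
  id: string;
  x: number;
  y: number;
  partitionId: string;
}

function dist(a: RouteNode, b: RouteNode): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Step 1: segregate consignments into clusters by partition_id.
function groupByPartition(nodes: RouteNode[]): Map<string, RouteNode[]> {
  const groups = new Map<string, RouteNode[]>();
  for (const n of nodes) {
    const bucket = groups.get(n.partitionId) || [];
    bucket.push(n);
    groups.set(n.partitionId, bucket);
  }
  return groups;
}

// Step 2: greedily collect nodes, always visiting the closest feasible one next.
// (Step 3, a TSP pass per cluster, would then refine each greedy route.)
function greedyRoute(hub: RouteNode, nodes: RouteNode[]): RouteNode[] {
  const remaining = nodes.slice();
  const route: RouteNode[] = [];
  let current = hub;
  while (remaining.length > 0) {
    remaining.sort((a, b) => dist(current, a) - dist(current, b));
    const next = remaining.shift() as RouteNode;
    route.push(next);
    current = next;
  }
  return route;
}
```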
<p>So, the trips that initially looked something like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652422233246/W_Fu0Ift9.png" alt="image.png" /></p>
<p><a target="_blank" href="https://shipsy.io/">Source</a></p>
<p>Were now extremely intuitive and looked like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652422320730/-we_NjBfz.png" alt="image.png" /></p>
<p><a target="_blank" href="https://shipsy.io/">Source</a></p>
<p>While we solved the “intuitiveness” challenge, the product required configuration for every client.</p>
<p>This means that every time Sam or Ray or any other person required a change or started using the software, they had to sit with the developer for backend configuration.</p>
<p>This is because every optimization strategy had different optimization parameters as per:</p>
<ul>
<li>Business requirements</li>
<li>Use cases</li>
</ul>
<p>Now, this doesn’t look like a very good DIY; does it?</p>
<p>Thus began our journey to create an extremely scalable and configurable routing software that would:</p>
<ul>
<li>Assume the configuration as per a specific business</li>
<li>Scale efficiently</li>
<li>Require minimum to zero developer involvement for backend configuration</li>
<li>Allow change at any particular period of time</li>
<li>Come with a negligible learning curve</li>
<li>Suggest highly optimized and intuitive trips every time (irrespective of the type and number of constraints)</li>
</ul>
<p>Let us have a quick walkthrough in the next section. </p>
<h3 id="heading-shipsys-diy-routing-playground">Shipsy’s DIY Routing Playground</h3>
<p>Shipsy boasts of a plethora of cutting-edge algorithms for vehicle routing problems. We wanted our clients to get a “feel” of how each algorithm fares in a real-world scenario. </p>
<p>Enter: Routing Playground. </p>
<p>It is one of our hallmark products and allows users to set up dummy customers and vehicles, and to apply their required constraints. </p>
<p>The end user can configure the software and try numerous constraints for creating their version of “the best” trips. </p>
<p>So, they can “play around” with the wide range of options we offer, which makes the name “Routing Playground” apt.</p>
<p>The user can choose any of the routing algorithms they want. The software would generate an optimized routing path within moments. </p>
<p>So, the trips were now optimized and intuitive, as shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652422493554/Addctngrk.png" alt="image.png" /></p>
<p><a target="_blank" href="https://shipsy.io/">Source</a></p>
<p>They can save the most satisfactory configuration as the default backend engine. This engine keeps suggesting optimized routes effortlessly and can be reconfigured whenever needed. </p>
<p>However, we were still left with another problem - Area Partitioning. Solving it would ensure efficient optimization, as no two vehicles would serve the same area.</p>
<h3 id="heading-solving-area-partitioning">Solving area partitioning</h3>
<p>It is often the case that the user wants vehicles to serve in select locations only. </p>
<p>For instance: A vehicle serving in the Noida region should not go to Delhi, and vice-versa. </p>
<p>It helped us in two ways:</p>
<ul>
<li>Quality of trip: Reduced overlap between vehicle trips.</li>
<li>Vehicle region restriction: Restricting a vehicle to serve a specific region.</li>
</ul>
<p>Our intuition here is to assign a “partition_id” to each vehicle and consignment. </p>
<p>Now, only the matching ids may be used to form a trip. We allow users to manually define a boundary in the map itself. </p>
<p>Thus, consignments that fall under that region would be assigned that partition_id.</p>
<p>Take a look at the following screenshot for a better understanding:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1652422630019/bjBhAdZaf.png" alt="image.png" /></p>
<p><a target="_blank" href="https://shipsy.io/">Source</a></p>
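<p>A common way to implement such map-drawn boundaries is a point-in-polygon test. Below is a sketch using the standard ray-casting method; the types are illustrative, and real geo-fences would use longitude/latitude coordinates with a geospatial library:</p>

```typescript
type Point = { x: number; y: number };

// Standard ray-casting test: cast a ray from the point and count how many
// polygon edges it crosses; an odd count means the point lies inside.
function inPolygon(p: Point, polygon: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const a = polygon[i];
    const b = polygon[j];
    const straddles = (a.y > p.y) !== (b.y > p.y);
    if (straddles && p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}

// Assign a consignment the partition_id of the first boundary that contains it.
function assignPartition(
  p: Point,
  partitions: { id: string; boundary: Point[] }[]
): string | undefined {
  const match = partitions.find((part) => inPolygon(p, part.boundary));
  return match ? match.id : undefined;
}
```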
<h2 id="heading-results-benefits-we-unlocked-with-our-diy-route-optimizations">Results: Benefits We Unlocked With Our DIY Route Optimizations</h2>
<h3 id="heading-1-diy-routing-for-every-stakeholder">1. DIY Routing for Every Stakeholder</h3>
<p>We started with Sam, who aimed at creating the most profitable trips. According to Sam, the trips that help him save the most money are the best ones.</p>
<p>However, the logistics and supply chain industry has a lot of Sams, Rays, and other stakeholders. </p>
<p>So, the definition of “optimized or best” trips varies from person to person.</p>
<p>Creating a DIY routing software allowed our solution to be as versatile as all our customers.</p>
<p>They no longer have to rely on the developers for small configuration changes.</p>
<p>Likewise, at the organizational level, we are able to use the DIY nature of the software for our requirements.</p>
<p>So, Shipsy’s Routing Playground is a DIY-hit with all the stakeholders.</p>
<h3 id="heading-2-ease-of-use-and-speed-of-processing">2. Ease of Use and Speed of Processing</h3>
<p>Google’s Routing Solver is complex to use and comes with a learning curve. Also, we have to define constraints and create equations for each one of them.</p>
<p>Shipsy’s clustering algorithm offers quick planning, creates intuitive-looking trips, and is easy to use.</p>
<p>When we used Google’s Routing Solver, trip planning for 1,000 consignments took 10 to 15 minutes. Shipsy’s clustering algorithm can plan trips for 20,000 to 30,000 consignments in a few minutes.</p>
<h3 id="heading-3-osrm-for-more-realistic-distance-function">3. OSRM for More Realistic Distance Function</h3>
<p>Our routing optimizer uses OSRM (Open Source Routing Machine). So, every trip suggestion takes into account the road geography between two points, such as dead ends. </p>
<p>The trips appear as polylines on OSM (OpenStreetMap) and are highly precise. </p>
<p>They also consider the real-world road conditions for smart decisions.</p>
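<p>For reference, OSRM’s HTTP route service takes semicolon-separated longitude,latitude pairs. The sketch below only builds the request URL; the public demo host is used for illustration, and a production deployment would point at a self-hosted OSRM instance:</p>

```typescript
// Build an OSRM /route/v1 request URL. OSRM expects "lon,lat" pairs
// joined by ";" in the path. The demo host below is illustrative only.
const OSRM_HOST = "https://router.project-osrm.org";

function osrmRouteUrl(points: Array<[number, number]>): string {
  const coords = points.map(([lon, lat]) => lon + "," + lat).join(";");
  return OSRM_HOST + "/route/v1/driving/" + coords + "?overview=full";
}

// Example: a two-stop route within Delhi NCR (coordinates are illustrative).
console.log(osrmRouteUrl([[77.209, 28.6139], [77.391, 28.5355]]));
```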
<h3 id="heading-4-cost-efficiency">4. Cost Efficiency</h3>
<p>Using OSM helped us reduce our dependence on Google Maps, which comes with usage fees. Also, OSRM offers blazing-fast speed because of its C++ backend. </p>
<h3 id="heading-5-versatile-strategizing">5. Versatile Strategizing</h3>
<p>Shipsy’s clustering algorithm offers a large number of options. For example, it offers 100+ consignment parameters and constraints for route planning.</p>
<p>So, strategizing can be as versatile as a client wants. For example, they can opt for Geo-fence partitioning, to map a rider to a specific geo-fence partition for zonal consignments. </p>
<p>At Shipsy, we aim to keep our products, operations, and performance razor-sharp. This ensures that our products scale well and stay relevant to the evolving client needs.</p>
<p>To be a part of our developer community, please visit our <a target="_blank" href="https://shipsy.io/careers/">Careers Page.</a></p>
<h2 id="heading-acknowledgments-and-contributions">Acknowledgments and Contributions</h2>
<p>As an effort towards learning and skill development, we have regular “Tech-A-Break” sessions at Shipsy. In these sessions, our team members exchange notes on ongoing innovation and optimizations. </p>
<p>This write-up stems from a recent Tech-A-Break session on Routing Playground.</p>
<p>Contributions: Sahil Arora, Aman Ruhela, Rajat Kumar, Krishna Yadav, and Nanubala Gnana Sai.</p>
]]></content:encoded></item><item><title><![CDATA[The Philosophy of Software Design - The Way We Build at Shipsy]]></title><description><![CDATA[Shipsy is a SaaS-based smart logistics management platform. We aim at solving the logistics challenges and operational pain points with intuitive and robust software solutions.
Being an agile organization with an intense focus on innovation entails d...]]></description><link>https://engineering.shipsy.io/the-philosophy-of-software-design-the-way-we-build-at-shipsy</link><guid isPermaLink="true">https://engineering.shipsy.io/the-philosophy-of-software-design-the-way-we-build-at-shipsy</guid><category><![CDATA[Design]]></category><category><![CDATA[software design]]></category><category><![CDATA[software development]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[software]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Thu, 05 May 2022 06:55:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1652256082797/bRvdGzLmX.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Shipsy is a SaaS-based smart logistics management platform. We aim at solving the logistics challenges and operational pain points with intuitive and robust software solutions.</p>
<p>Being an agile organization with an intense focus on innovation entails designing and creating future-proof technological solutions that scale, evolve, and stay relevant.</p>
<p>However, delivering across such expectations requires an impeccably robust codebase, which is readable, maintainable, and intuitive for years down the lane.</p>
<p>We aimed at creating a codebase that can serve as a reference for new developers and “explains itself” even when the original developer is not around. This is also called future-proofing, and here is how we energized those efforts at Shipsy.</p>
<h2 id="heading-problem-statement-can-software-designing-be-future-proof">Problem Statement: Can Software Designing Be Future-Proof?</h2>
<p>Future-proofing has many different faces and definitions based on the business use cases, such as:</p>
<ul>
<li>Preserving code readability and maintainability even after the original developer has left the organization</li>
<li>Ensuring that code is not clunky for future usage, extensions, or upgrades</li>
<li>Developing a system design that is effortlessly relevant for every stakeholder</li>
</ul>
<p>In our pursuit, we zeroed in on the fundamentals - problem decomposition, as outlined by industry stalwart John Ousterhout.</p>
<p>Any computer software is one big problem decomposition challenge - the number of modules, functionalities of these modules, documentation, dependencies, and the overall complexity of the functional system.</p>
<p>As the clients suggest changes or ask for more features, there is hardly the time to play by the rules. So, code refactoring or best practices tend to take the back seat, especially in startups. </p>
<p>Then, how can we make our code future-proof?</p>
<h2 id="heading-solution-reduce-system-complexity">Solution: Reduce System Complexity</h2>
<p>System complexities affect developer productivity in many ways, even if it is not apparent from the very beginning. Difficulty in understanding how a piece of code works, significant effort to implement small changes, and trouble localizing all the points where a single change reflects are some symptoms of complex systems.</p>
<p>As the software complexity increases, it becomes more vulnerable to bugs and delays in development. </p>
<p>Software complexities are of three types:</p>
<ul>
<li>Change Amplification - Any simple change requires changes in many places, as shown below:</li>
</ul>
<p>Bad Example:</p>
<pre><code><span class="hljs-comment">// file1.js</span>
<span class="hljs-keyword">if</span> (direction <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">'UP'</span>) {
 <span class="hljs-comment">// ....</span>
}

<span class="hljs-comment">// file2.js</span>
<span class="hljs-keyword">if</span> (direction <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">'UP'</span>) {
 <span class="hljs-comment">// ....</span>
}
.....
</code></pre><p>Good Example:</p>
<pre><code><span class="hljs-keyword">enum</span> <span class="hljs-title">Direction</span> {
 UP <span class="hljs-operator">=</span> ‘UP’,
 DOWN <span class="hljs-operator">=</span> ‘DOWN’,
 LEFT <span class="hljs-operator">=</span> ‘LEFT’,
 RIGHT <span class="hljs-operator">=</span> ‘RIGHT’
}
<span class="hljs-comment">// file1.js</span>
<span class="hljs-keyword">if</span> (direction <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> Direction.UP) {
 <span class="hljs-comment">// ....</span>
}

<span class="hljs-comment">// file2.js</span>
<span class="hljs-keyword">if</span> (direction <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> Direction.UP) {
 <span class="hljs-comment">// ....</span>
}
</code></pre><ul>
<li>Cognitive Load - Developers need to carry a lot of information in their heads to complete a task. This increases the chances that they miss something, leading to bugs and delays in development. </li>
</ul>
<p>As shown in the above example, the shared enum variable holds the direction, and each file references that variable.</p>
<ul>
<li>Unknown Unknowns - There's important information you need to know before making a change, but it is not obvious where to find it or even if it is needed. </li>
</ul>
<p>As shown in the following example, the shared enum variable has the “string” type. However, a few files treat direction as a “number”, so comparisons against the enum silently fail.</p>
<pre><code><span class="hljs-keyword">enum</span> <span class="hljs-title">Direction</span> {
 UP <span class="hljs-operator">=</span> <span class="hljs-string">'UP'</span>,
 DOWN <span class="hljs-operator">=</span> <span class="hljs-string">'DOWN'</span>,
 LEFT <span class="hljs-operator">=</span> <span class="hljs-string">'LEFT'</span>,
 RIGHT <span class="hljs-operator">=</span> <span class="hljs-string">'RIGHT'</span>
}
<span class="hljs-comment">// file1.js</span>
<span class="hljs-keyword">if</span> (direction <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> Direction.UP) {
 <span class="hljs-comment">// ....</span>
}

<span class="hljs-comment">// file2.js</span>
<span class="hljs-keyword">if</span> (direction <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> Direction.UP) {
 <span class="hljs-comment">// ....</span>
}

<span class="hljs-comment">// file3.js</span>
<span class="hljs-comment">// direction holds numbers instead of string</span>
<span class="hljs-comment">// for e.g up -&gt; 1, down -&gt; -1 left -&gt; -1, right -&gt; 1</span>
<span class="hljs-keyword">if</span> (direction <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> Direction.UP) {
 <span class="hljs-comment">// always false: direction is a number here, Direction.UP is a string</span>
}
</code></pre><p>Unknown unknowns are the most critical type of complexity, as in this case, you don’t even know that there is an underlying vulnerability or risk in your code.</p>
<p>Generally, in agile organizations such as startups, software development is tactical: the main focus is on faster delivery and accomplishing the intended functionality via shortcuts. With no specific focus on long-term goals, there is usually no planning (strategic development) for code refactoring.</p>
<p>Ideally, software development must be strategic, keeping the long-term goals, such as maintainability, in mind. However, the more tactical a developer becomes, the more time they take in developing software, as shown in the following graph:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651670496675/kCppaK6bl.png" alt="software graph.png" /></p>
<p><a target="_blank" href="https://speakerdeck.com/mrinalwadhwa/fighting-complexity-in-elixir?slide=23">Source</a></p>
<p>Hence, we started to look beyond the “working code” and approached code refactoring strategically, targeting specific projects that had matured well. This helped us balance the strategic and tactical development goals and steered us towards effortless yet agile system development.</p>
<p>Here are three things that help us reduce system complexities consistently.</p>
<ol>
<li><h3 id="heading-modular-design">Modular Design</h3>
</li>
</ol>
<p>Addressing complexities, one at a time.</p>
<p>We create small and independent modules that reduce complexity by ridding dependencies. </p>
<p>However, there is a catch - independent doesn’t mean more modules.
We implement independence in our modular design by ensuring that the implementation details in the service class of one module are not visible to another. Also, we use encapsulation so that the service class shows fewer implementation details for layers of abstraction.</p>
<p>Below, we share key considerations for concise and strategic modular design:</p>
<ul>
<li>Use default class for common behavior</li>
<li>Avoid shallow modules, as they multiply the number of modules, which yet again adds to the overall complexity: each extra module is one more interface to learn and invoke.</li>
</ul>
<p>Further, shallow modules tend to be complex because they pull in functionality from other modules as well.</p>
<p>Take a look at the following examples for a better understanding.</p>
<p>Bad Example:</p>
<pre><code>export <span class="hljs-keyword">abstract</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Inquiry</span> {
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">createInquiry</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: CreateInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">sendInquiry</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: SendInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">searchInquiry</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: SearchInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">searchByReferenceNumber</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: SearchInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">updateDeadline</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: UpdateInquiryDeadlineDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">getInquiryDetailsForShipper</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: FetchInquiryDetailsDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">getInquiryDetailsForFF</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: FetchInquiryDetailsDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">fetchInquiry</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: FetchInquiryDetailsDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">updateInquiry</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: UpdateInquiryDto</span>)</span>;
}
</code></pre><p>Deep modules work the best for light and clean interfaces, as most of the information is hidden, even if there is a lot of functionality in a method.</p>
<p>Good Example</p>
<pre><code>export <span class="hljs-keyword">abstract</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Inquiry</span> {
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">create</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: CreateInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">send</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: SendInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">search</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: SearchInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">update</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: UpdateInquiryDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">fetch</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: FetchInquiryDetailsDto</span>)</span>;
 <span class="hljs-function"><span class="hljs-keyword">abstract</span> <span class="hljs-title">delete</span>(<span class="hljs-params">organisationId: <span class="hljs-keyword">string</span>, <span class="hljs-keyword">params</span>: DeleteInquiryDto</span>)</span>;
}
</code></pre><ol>
<li><h3 id="heading-code-documentation">Code Documentation</h3>
</li>
</ol>
<p>Code documentation or commenting is essential to make the code more understandable and readable for future references and correlations.</p>
<p>While good commenting can overcome the unknown unknowns and cognitive load challenges, over-commented code makes things complex.</p>
<p>Below, we share some key pointers to keep in mind for code documentation.</p>
<p>Good comments:</p>
<ul>
<li>Legal comments</li>
<li>TODO comments</li>
<li>Explain the intent of the coder regarding some important implementation, as shown in the following snippet</li>
</ul>
<pre><code><span class="hljs-comment">/**
* <span class="hljs-doctag">@summary</span> dump inquiry and bid details to rate_master when bid is sent for an inquiry
* <span class="hljs-doctag">@description</span>
* 1. find port and carrier details to set db internal identifiers
* 2. separate charges with and without container type, size from bid charge details and
*    prepare them into rate_master charge format
* 3. check if revised bid is sent
* 4. for each Inquiry container type,size insert/update rate master object
* <span class="hljs-doctag">@param</span> {string} organisationId [Organization identifier]
* <span class="hljs-doctag">@param</span> {Users} user [user details]
* <span class="hljs-doctag">@param</span> {Inquiry} inquiry [inquiry details]
* <span class="hljs-doctag">@param</span> {Bids} bid [bid details]
* <span class="hljs-doctag">@param</span> {OptionsType} options [extra properties]
*/</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">addInquiryBidInRateMaster</span>(<span class="hljs-params">
 organisationId: <span class="hljs-keyword">string</span>,
 user: Users,
 inquiry: Inquiry,
 bid: Bids,
 options: OptionsType = {}
</span>) </span>{
}
</code></pre><p>Bad comments:</p>
<ul>
<li>Noise comments offer redundant information, such as explaining the name of a function that is already easy to understand without a comment.</li>
<li>Commented-out code creates confusion and brings down code readability.</li>
<li>Parameter-definition comments and other mandated comments also hurt code readability.</li>
</ul>
<pre><code><span class="hljs-comment">// Don’t repeat the code</span>
<span class="hljs-comment">// get time diff</span>
let timeDiff <span class="hljs-operator">=</span> currentTime <span class="hljs-operator">-</span> queryStartTime;
<span class="hljs-comment">// remove milliseconds</span>
const milliseconds <span class="hljs-operator">=</span> Math.round(timeDiff <span class="hljs-operator">%</span> <span class="hljs-number">1000</span>);
timeDiff <span class="hljs-operator">/</span><span class="hljs-operator">=</span> <span class="hljs-number">1000</span>;
const <span class="hljs-literal">seconds</span> <span class="hljs-operator">=</span> Math.round(timeDiff <span class="hljs-operator">%</span> <span class="hljs-number">60</span>);
<span class="hljs-comment">// remove seconds from the date</span>
timeDiff <span class="hljs-operator">=</span> Math.floor(timeDiff <span class="hljs-operator">/</span> <span class="hljs-number">60</span>);
<span class="hljs-comment">// get minutes</span>
const <span class="hljs-literal">minutes</span> <span class="hljs-operator">=</span> Math.round(timeDiff <span class="hljs-operator">%</span> <span class="hljs-number">60</span>);
<span class="hljs-comment">// remove minutes from the date</span>
timeDiff <span class="hljs-operator">=</span> Math.floor(timeDiff <span class="hljs-operator">/</span> <span class="hljs-number">60</span>);
<span class="hljs-comment">// get hours</span>
const <span class="hljs-literal">hours</span> <span class="hljs-operator">=</span> Math.round(timeDiff <span class="hljs-operator">%</span> <span class="hljs-number">24</span>);
<span class="hljs-comment">// remove hours from the date</span>
</code></pre><ol>
<li><h3 id="heading-function-and-variable-naming">Function and Variable Naming</h3>
</li>
</ol>
<p>Every good function or variable name is a combination of three things: Prefix+Action+Context.</p>
<p>Prefix - It is applicable when a function returns a boolean value, such as:</p>
<ul>
<li><code>is</code> - current state of the context, for example <code>isMember</code> or <code>isAdmin</code></li>
<li><code>has</code> - current context possesses a certain value, for example <code>hasPermission</code></li>
<li><code>should</code> - positive conditional statement coupled with an action, for example <code>shouldUpdateEntity</code></li>
</ul>
<p>Action - It is the verb part of your function name, such as:</p>
<ul>
<li><code>get</code> - accesses data, for example <code>getStatusCount</code></li>
<li><code>set</code> - sets variable with value, for example <code>setUserRole</code></li>
<li><code>reset</code> - reset variable with an initial value, for example <code>resetItems</code></li>
<li><code>fetch</code> - requests data usually network requests, for example <code>fetchPosts</code></li>
<li><code>remove</code> - removes something from somewhere, for example <code>removeFilter</code></li>
<li><code>delete</code> - completely remove the existence, for example <code>deletePost</code></li>
<li><code>compose</code> - creates new data from existing data, for example <code>composeDisplayAddress(place, area, pincode)</code></li>
<li><code>handle</code> - handles action, for example <code>handleButtonClick</code></li>
</ul>
<p>Context - It is the entity or data a function operates on, for example the <code>Posts</code> in <code>fetchPosts</code> or the <code>Filter</code> in <code>removeFilter</code>.</p>
<p>Things to keep in mind during function and variable naming:</p>
<ul>
<li>The name should be intuitive and descriptive</li>
<li>Avoid contractions or abbreviations that are confusing</li>
<li>Avoid context duplication</li>
<li>Pay attention to singular and plural names</li>
</ul>
<p>Bad Example:</p>
<pre><code>const isExtraServicesSelected <span class="hljs-operator">=</span> lodash.difference(inquiry.origin.services, originServices).<span class="hljs-built_in">length</span> <span class="hljs-operator">&gt;</span> <span class="hljs-number">0</span>;
</code></pre><p>Good Example:</p>
<pre><code>const hasExtraServicesSelected <span class="hljs-operator">=</span> lodash.difference(inquiry.origin.services, originServices).<span class="hljs-built_in">length</span> <span class="hljs-operator">&gt;</span> <span class="hljs-number">0</span>;
</code></pre><p>Bad Example:</p>
<pre><code>const orgModules <span class="hljs-operator">=</span> orgConfig.modules;
</code></pre><p>Good Example:</p>
<pre><code>const organisationModules <span class="hljs-operator">=</span> organisationConfig.modules;
</code></pre><p>Bad Example:</p>
<pre><code>class Country {
  constructor() {
    this.country_name = '';
  }
}
</code></pre><p>Good Example:</p>
<pre><code>class Country {
  constructor() {
    this.name = '';
  }
}
</code></pre><p>Our previous post on <a target="_blank" href="https://engineering.shipsy.io/building-a-clean-and-readable-codebase-key-takeaways">clean coding</a> covers the function and variable naming practices with detailed examples and can be read for a better understanding.</p>
<h1 id="heading-results-paving-the-way-towards-code-excellence">Results: Paving the Way Towards Code Excellence</h1>
<p>Future-proofing the entire codebase is a gradual and continuous process.</p>
<p>At Shipsy, we created a robust, reliable, and thorough engineering handbook for driving clean coding practices. With a renewed focus on better design and maintainable code, we are moving towards a well-established development organization culture that motivates all of us to work smarter and better. </p>
<p>Explore <a target="_blank" href="https://shipsy.io/careers/">careers at Shipsy</a> to be a part of our consistently improving and innovation-oriented developer community.</p>
<h2 id="heading-acknowledgments-and-contributions">Acknowledgments and Contributions</h2>
<p>As an effort towards consistent learning and skill development, we have regular “<a target="_blank" href="https://engineering.shipsy.io/">Tech-A-Break</a>” sessions at Shipsy where our team members exchange notes on specific ideas and topics. 
Contributions: Sahil Arora, Viraj Shah</p>
]]></content:encoded></item><item><title><![CDATA[Migrating to Redshift: Rethinking and Scaling Analytics Efficiently]]></title><description><![CDATA[Analytics is the backbone of smart decision-making and is also one of the core offerings of Shipsy. However, previously we used transactional DBs for analytics that failed to cater to the growing analytics demands we had, simply because they are not ...]]></description><link>https://engineering.shipsy.io/migrating-to-redshift-rethinking-and-scaling-analytics-efficiently</link><guid isPermaLink="true">https://engineering.shipsy.io/migrating-to-redshift-rethinking-and-scaling-analytics-efficiently</guid><category><![CDATA[analytics]]></category><category><![CDATA[data analysis]]></category><category><![CDATA[caching]]></category><category><![CDATA[queue]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Thu, 28 Apr 2022 06:58:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1651128346817/edDlHwOA3.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Analytics is the backbone of smart decision-making and is also one of the core offerings of Shipsy. However, previously we used transactional DBs for analytics that failed to cater to the growing analytics demands we had, simply because they are not built for analytics. </p>
<p>Not only did this lead to an increase in the number of support tickets, it also slowed down our IOPS, as duplicate requests kept overburdening the servers. Hence, scaling our client-initiated, on-demand analytics had been on our minds for quite some time.</p>
<p>When we say scaling, we don’t refer only to the number of download requests we can successfully serve, but also to concurrent data downloads and the time large downloads take. </p>
<p>There are obvious advantages to scaling data analytics, such as operational efficiency and large data handling, apart from freeing the process queue from choking. </p>
<p>Here is a glimpse of how we scaled and optimized our on-demand analytics and what benefits stemmed from our pursuit.</p>
<h2 id="heading-on-demand-analytics-previous-scenario">On-Demand Analytics: Previous Scenario</h2>
<p>Incoming data download requests were processed using scripts. These scripts ran infinitely, looking for pending download requests, as shown in the following image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651125183697/tm9sM7j5w.png" alt="Screenshot 2022-04-27 at 5.57.56 PM.png" /></p>
<p>This approach was redundant because it lacked: </p>
<ul>
<li>Concurrency</li>
<li>Efficient Resource Utilization</li>
<li>Visibility</li>
</ul>
<p>Things got irksome because of the following constraints stemming from the process inefficiencies:</p>
<ul>
<li>Size - Clients need a minimum of one month of data for analytics, and some of them were generating mountains of data, such as 2M+ orders in a month.</li>
<li>Requests - An average day, especially at the start or end of a month, brought a crushing workload of 300 to 600 download requests. </li>
<li>Query Time - Such large data downloads could easily take 15 to 20 minutes in any transactional DB (as was the case with us).</li>
<li>Waiting Time - Queue processing meant only one request was processed at a time, and nobody likes to wait hours for their turn. Users raised multiple requests when a request took too long to be processed.</li>
</ul>
<p>Thus, the queue got choked with requests, most of them being duplicate ones. </p>
<p>This killed operational efficiency and kept the queue running for duplicate requests, leading to resource wastage.</p>
<p>All this called for a solution that could:</p>
<ul>
<li>Fetch large data quickly (which seemed impossible with transactional DBs)</li>
<li>Perform concurrent request processing</li>
<li>Cache duplicate requests</li>
</ul>
<h2 id="heading-what-did-we-do">What Did We Do?</h2>
<p>We addressed these inefficiencies by migrating to Redshift, which is specifically built for analytics.</p>
<h3 id="heading-migrating-to-redshift">Migrating to Redshift</h3>
<p>Transactional databases are not designed for performance-intensive analytics, and Redshift offered us an obvious advantage in this regard. It could execute operations on piles of data with lightning-fast speed and helped us overcome the performance and wait-time-related issues.</p>
<p>Further, Redshift supports regular SQL queries, so no learning curve was involved.</p>
<p>Next, we share glimpses of our migration to Redshift.</p>
<p>We created ETL pipelines to synchronize the data from our previous transactional database to Redshift:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651125463604/HtrOVtgA1.jpeg" alt="ETL-Process.jpeg" /></p>
<p><a target="_blank" href="https://databricks.com/glossary/extract-transform-load">Source</a></p>
<p>This pipeline first streams the data to S3 and then uses S3 to update the Redshift database, as shown in the following image:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651125526200/JdO7wmvST.png" alt="Screenshot 2022-04-28 at 11.13.47 AM.png" /></p>
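The S3-to-Redshift load step above is typically a standard Redshift COPY statement. A minimal sketch of building one follows; the table name, bucket path, and IAM role are placeholders for illustration, not our actual resources.

```typescript
// Sketch of the S3 -> Redshift load step as a standard Redshift COPY.
// Table, bucket, and IAM role below are placeholders, not real resources.
function buildCopyStatement(table: string, s3Prefix: string, iamRole: string): string {
  return (
    "COPY " + table +
    " FROM '" + s3Prefix + "'" +
    " IAM_ROLE '" + iamRole + "'" +
    " FORMAT AS JSON 'auto';"
  );
}

const copySql = buildCopyStatement(
  "orders_analytics",
  "s3://example-bucket/orders/2022/04/",
  "arn:aws:iam::123456789012:role/redshift-load-role"
);
```

The statement would then be executed against Redshift like any other SQL query.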
<p>Next, we used a query builder to execute the data analytics queries for the Redshift schema.</p>
<h3 id="heading-download-handler-for-analytics">Download Handler for Analytics</h3>
<p>We leveraged our Download Handler (DH) service to achieve concurrency and caching.</p>
<p>We had refined our DH operations over years of use, which gave us concurrency management out of the box. </p>
<p>Here, we had 2 main blockers:</p>
<ul>
<li>Caching</li>
<li>Analytics</li>
</ul>
<p>To cache any incoming download request, we needed a hash key for that specific request. </p>
<p>For this, we used a cacheManager that did two things:</p>
<ul>
<li>Toggled caching on or off</li>
<li>Generated a hash string</li>
</ul>
<p>Once we had a hash string for a request, we could categorize it as main or duplicate for managing the cache.</p>
<p>All the requests with the same hash value were grouped under one main request and its duplicates. </p>
<p>Here is the snapshot for an overall idea:</p>
<pre><code>cacheManager (params: any) 
    {
        const { queryToExecute: query, queryParams, organisationId } <span class="hljs-operator">=</span> params;
        const orgWiseDumpCacheConfig <span class="hljs-operator">=</span> config.orgWiseDumpCacheConfig <span class="hljs-operator">|</span><span class="hljs-operator">|</span> {};
        let useCache <span class="hljs-operator">=</span> <span class="hljs-literal">true</span>;
        <span class="hljs-keyword">if</span> (orgWiseDumpCacheConfig[processName]) {
            useCache <span class="hljs-operator">=</span> get(orgWiseDumpCacheConfig, `${processName}.${organisationId}.useCache`, <span class="hljs-literal">false</span>) <span class="hljs-operator">|</span><span class="hljs-operator">|</span> <span class="hljs-literal">false</span>;
        }

        const toRet: any <span class="hljs-operator">=</span> {
            useCache,
        };

        <span class="hljs-keyword">if</span> (useCache) {
            toRet.hash <span class="hljs-operator">=</span> hashingFunction(query) <span class="hljs-operator">+</span> hashingFunction(queryParams, { unorderedArrays: <span class="hljs-literal">true</span> });
            toRet.ttl <span class="hljs-operator">=</span> get(orgWiseDumpCacheConfig, `${processName}.${organisationId}.ttl`, <span class="hljs-number">300000</span>) <span class="hljs-operator">|</span><span class="hljs-operator">|</span> <span class="hljs-number">300000</span>;
        }

        <span class="hljs-keyword">return</span> toRet;
    }
</code></pre><p>Once a request came in, its hash value was checked against existing requests; if the request was a duplicate, it was linked to the main request. </p>
<p>Once the main request was completed, all the linked duplicate requests were also marked complete.</p>
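The main/duplicate linking can be sketched as below. This is a hypothetical illustration of the idea, not Shipsy's actual code; the names (DownloadRequest, registerRequest, completeRequest) are made up for the example.

```typescript
// Hypothetical sketch of main/duplicate request linking via hash keys.
type DownloadRequest = {
  id: string;
  hash: string;
  status: "PENDING" | "COMPLETED";
  duplicateOf?: string; // id of the main request this one is linked to
};

const mainRequestByHash = new Map<string, DownloadRequest>();

// The first pending request for a hash becomes the main request; later
// requests with the same hash are linked to it as duplicates and skip
// executing the actual query.
function registerRequest(req: DownloadRequest): DownloadRequest {
  const main = mainRequestByHash.get(req.hash);
  if (main !== undefined && main.status === "PENDING") {
    req.duplicateOf = main.id;
  } else {
    mainRequestByHash.set(req.hash, req);
  }
  return req;
}

// When the main request finishes, mark it and every linked duplicate complete.
function completeRequest(requests: DownloadRequest[], mainId: string): void {
  for (const req of requests) {
    if (req.id === mainId || req.duplicateOf === mainId) {
      req.status = "COMPLETED";
    }
  }
}
```

With this shape, only one query runs per unique hash, and every waiting duplicate resolves the moment the main request does.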
<p>Here is the complete flowchart of the entire process:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1651126671373/Zo4N8ldXj.png" alt="flowchart.png" /></p>
<p>For analytics, we treated Redshift as a normal transactional data source and created a new dbId in DH.</p>
<p>So, an analytics dump handler would look like a normal transactional database dump handler.</p>
<p>Finally, we configured our resources for a specific number of concurrent requests and were able to successfully scale our on-demand analytics.</p>
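Limiting the number of concurrent requests can be sketched as a simple worker-pool pattern. The snippet below is illustrative only, assuming each download is an async task; it is not Shipsy's actual implementation.

```typescript
// Minimal concurrency-limiter sketch (illustrative; not Shipsy's actual code).
// Runs at most `limit` tasks at a time; remaining tasks wait for a free worker.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let nextIndex = 0;

  // Each worker repeatedly claims the next unprocessed task until none remain.
  async function worker(): Promise<void> {
    while (nextIndex < tasks.length) {
      const i = nextIndex++;
      results[i] = await tasks[i]();
    }
  }

  const workerCount = Math.min(limit, tasks.length);
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return results;
}
```

In practice, the limit would be tuned to the provisioned resources so that concurrent downloads never starve the rest of the system.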
<h2 id="heading-results-benefits-we-unlocked-with-redshift-implementation">Results: Benefits We Unlocked With Redshift Implementation</h2>
<h3 id="heading-constraint-solved-query-time-reduction">Constraint Solved: Query Time Reduction</h3>
<p>So, a query for Rider Level Aggregated Data for one month, for a client with 2.25M orders per month, used to take 15 to 20 minutes; it now completed in a few minutes. That is an order-of-magnitude reduction in query time!</p>
<h3 id="heading-efficient-non-redundant-and-scalable-analytics">Efficient, Non-Redundant, and Scalable Analytics</h3>
<p>Our implementation analysis showed that, previously, roughly 50% of the requests running in the queue were duplicates, draining our resources. </p>
<p>With the successful implementation of DH and Redshift, we scaled our operations and unlocked efficiency and speed.</p>
<p>Consistent innovation alongside consistent improvement - Shipsy’s development culture is a perfect combination of consistency and dynamism. We aim to keep our products, operations, and performance razor-sharp.</p>
<p>To be a part of our developer community, please visit our <a target="_blank" href="https://shipsy.io/careers/">Careers Page</a>. </p>
<h2 id="heading-acknowledgments-and-contributions">Acknowledgments and Contributions</h2>
<p>As an effort towards consistent learning and skill development, we have regular “Tech-A-Break” sessions at Shipsy where team members exchange notes on specific ideas and topics. This write-up stems from a recent Tech-A-Break session on on-demand analytics, helmed by <em>Shikhar Sharma</em> and <em>Garima Goyal</em>.</p>
]]></content:encoded></item><item><title><![CDATA[Building A Clean And Readable Codebase - Key Takeaways]]></title><description><![CDATA[At Shipsy, we view our code as a reflection of our values, as a community of developers.
However, every developer leaves a distinct mark on their code because of a subjective approach towards coding. While this doesn’t escalate into concern when the ...]]></description><link>https://engineering.shipsy.io/building-a-clean-and-readable-codebase-key-takeaways</link><guid isPermaLink="true">https://engineering.shipsy.io/building-a-clean-and-readable-codebase-key-takeaways</guid><category><![CDATA[clean code]]></category><category><![CDATA[best practices]]></category><category><![CDATA[code]]></category><category><![CDATA[engineering]]></category><category><![CDATA[General Programming]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Mon, 14 Mar 2022 06:19:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1647237679402/PaISO6lDs.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>At Shipsy, we view our code as a reflection of our values, as a community of developers.</p>
<p>However, every developer leaves a distinct mark on their code because of a subjective approach towards coding. While this doesn’t escalate into concern when the developer is the only one working on a particular project, when we bring more developers and future-proofing of the codebase into the picture, things change.</p>
<p>Right from naming conventions to code structure and functions to modules - the understanding and perception of “How to write code” changes from developer to developer. This can bring down code readability and affect the overall value an organization can draw from its assets.</p>
<p>Also, we aim to make our code assets more standardized, global, neat, and perfectly readable for future-proofing.</p>
<p>Hence, we follow a set of well-cultivated code practices that help us keep our code readable for facilitating an inclusive engineering culture.</p>
<p>Below, we share the key takeaways from our clean coding session to energize similar efforts across the entire industry. </p>
<h2 id="heading-why-clean-code">Why Clean Code?</h2>
<p>When it comes to coding, there are two situations that no developer can deny having encountered:</p>
<p>One, where the developer begins with a crystal clear understanding of all the names, comments, references, etc. 
As the code grows larger and more complex, the developer tends to lose their grip on what all the confusing names and variables mean.</p>
<p>The second is when a developer doesn’t necessarily play by the rules when it comes to coding. </p>
<p>While this developer might be agile and resourceful, they leave the organization with a code puzzle that can take forever to solve or make sense of.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1646835872527/Pq3qfNUYz.jpeg" alt="coding2.jpeg" />
<a target="_blank" href="https://miro.medium.com/max/250/1*NRHPCzMmF2Atx_aTtjiXhA.jpeg">Source</a></p>
<p>It is normal for an organization to have different types of software codebases and to update, edit, add, or remove code lines as the need arises. Organizational coding is also a collaborative exercise in which multiple people or teams work on the same codebase simultaneously.</p>
<p>Generally, highly experienced developers or code architects create the code wireframe of the conceptual software system. These code architects also define the coding standards and various architecture standards that need to be followed during the process.</p>
<p>The developer team then works on this wireframe keeping the standards and software development specifications in mind.</p>
<p>As a number of developers are working on the different components of the software, it is inevitable for the code to have distinct peculiarities in terms of naming conventions, variables, code calls, parameters, etc.</p>
<p>This leads to inconsistencies, redundancies, and clunky code that is hard to read, debug, and maintain with future requirements in mind.</p>
<p>Further, every developer can make certain assumptions:</p>
<ul>
<li>This function name is so clear and intuitive, anyone can tell what is being done here</li>
<li>Commenting on every function will make my code more readable and clearer</li>
<li>I can clearly use this function call to bypass a whole page of standard routine for making my job easier</li>
<li>I can assist whenever there is a problem; after all, I am not leaving the organization anytime soon</li>
</ul>
<p>But last-minute fixes, debugging, demand escalation, and an ever-expanding set of client demands affect the code readability.</p>
<p>The code is now a mess with:</p>
<ul>
<li>Variable names that no longer make sense</li>
<li>A chain of comments that seem to convey a different meaning to every other developer </li>
<li>A number of functions that look redundant or “not required”</li>
</ul>
<p>All of them affect the code readability severely and make code upgrades, remodels, changes, and reuse extremely difficult.</p>
<p>The impact of bad coding practices goes beyond readability as it brings down productivity as shown below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1646836296520/uOD_3yAzX.png" alt="image.png" />
<a target="_blank" href="https://www.informit.com/articles/article.aspx?p=1235624&amp;seqNum=3">Source</a></p>
<h2 id="heading-bootstrapping-the-change">Bootstrapping The Change</h2>
<p>Our main focus is on the usual suspects:</p>
<ul>
<li>Variables</li>
<li>Functions</li>
<li>Formatting</li>
<li>Comments</li>
</ul>
<p>No matter how clean the code we start with, we eventually end up with messy modules hiding among millions of code lines. While this might not seem irksome at first, it can wreak havoc when the code has to be refined or edited in light of new client demands.</p>
<p>Many times, these revisions happen in the absence of the original developer, or after considerable time has passed since the project delivery. In that case, code readability and standardization become extremely significant for completing these revisions successfully.</p>
<p>This is also important from the organizational perspective as internal applications and operations also need a clean codebase for staying future-proof even when the employees leave the organization.</p>
<p>Hence, we targeted the above-mentioned four key areas to build a clean and readable codebase, and below we share the highlights for the same.</p>
<h2 id="heading-clean-coding-4-key-considerations-for-an-easy-yet-significant-initiation">Clean Coding: 4 Key Considerations for an Easy, yet Significant Initiation</h2>
<p>Given below are glimpses from our playbook for driving a clean code practice aimed at creating a global codebase.</p>
<h3 id="heading-variables">Variables</h3>
<p>#1 - Meaningful Variable Names</p>
<p>Make sure that the variable names are intention-revealing and meaningful. The basic rule of thumb is to check whether a name requires a comment to reveal its intention.</p>
<p>If yes, then you need to change it.</p>
<p>Bad Example:</p>
<pre><code>const hawbNo <span class="hljs-operator">=</span> NON_NEGOTIABLE_FIELD_MASTER_CODES.HOUSE_AWB_NUMBER;
const date <span class="hljs-operator">=</span> moment(booking.sailingDate).format(<span class="hljs-string">"YYYY/MM/DD"</span>);
</code></pre><p>Good Example:</p>
<pre><code>const houseAWBNumber <span class="hljs-operator">=</span> NON_NEGOTIABLE_FIELD_MASTER_CODES.HOUSE_AWB_NUMBER;
const sailingDate <span class="hljs-operator">=</span> moment(booking.sailingDate).format(<span class="hljs-string">"YYYY/MM/DD"</span>);
</code></pre><p>#2 - Avoid Misinformation</p>
<p>Misinformative names always create problems and cost you time and productivity. Inconsistent spelling is also misinformation. </p>
<p>In the bad example below, the function “getById” filters by code and returns the carrier code under the key “id”. </p>
<p>Hence, anybody calling this function will have no idea about this and will surely end up with lots of errors. </p>
<p>Bad Example:</p>
<pre><code>class CarrierService extends CarrierInterface {
  async getById( carrierId: <span class="hljs-keyword">string</span> <span class="hljs-operator">|</span> null, .... ) {
    let filters: KeyWithNullableValue <span class="hljs-operator">=</span> {
        code: carrierId, <span class="hljs-comment">// here carrierId is attached to the code. even function name is getById</span>
    };
    functionLogic()
    <span class="hljs-keyword">return</span> {
      id: carrier.<span class="hljs-built_in">code</span>, <span class="hljs-comment">// here it passes code in the key "id"</span>
      name: carrier.<span class="hljs-built_in">name</span>,
    }
  }
}
</code></pre><p>Good Example:</p>
<pre><code>class CarrierService extends CarrierInterface {
  async getById( carrierId: <span class="hljs-keyword">string</span> <span class="hljs-operator">|</span> null, .... ) {
    let filters: KeyWithNullableValue <span class="hljs-operator">=</span> {
        id: carrierId,
    };
    functionLogic()
    <span class="hljs-keyword">return</span> {
      code: carrier.<span class="hljs-built_in">code</span>,
      name: carrier.<span class="hljs-built_in">name</span>,
    }
  }
}
</code></pre><p>#3 - Avoid Noise Words and Non-Distinguishable Names</p>
<p>Further, it is important to make meaningful distinctions and avoid noise words: names like data or info tell the reader nothing about what a variable holds.  </p>
<p>You might be tempted to change one name in an arbitrary manner when you need to use the same name to refer to two different things in the same scope. </p>
<p>However, avoid such arbitrary misspellings at all costs, as they degrade code readability.</p>
<p>Some examples of non-distinctive names are:</p>
<pre><code>booking <span class="hljs-keyword">or</span> bookingData
<span class="hljs-type">money</span> <span class="hljs-keyword">or</span> moneyAmount
<span class="hljs-keyword">user</span> <span class="hljs-keyword">or</span> userInfo
</code></pre><p>#4 - No Generic Names</p>
<p>Avoid using generic names, such as sum, total, count, etc., and add prefixes to them to boost code readability.</p>
<p>Bad Example:</p>
<pre><code><span class="hljs-function"><span class="hljs-keyword">async</span> <span class="hljs-title">inquiryBidsCount</span> (<span class="hljs-params">....</span>)</span> {
  <span class="hljs-keyword">let</span> count = <span class="hljs-number">0</span>;
  functionLogic();
  <span class="hljs-keyword">return</span> count;
}
</code></pre><p>Good Example:</p>
<pre><code><span class="hljs-function"><span class="hljs-keyword">async</span> <span class="hljs-title">inquiryBidsCount</span> (<span class="hljs-params">....</span>)</span> {
  <span class="hljs-keyword">let</span> bidsCount = <span class="hljs-number">0</span>;
  functionLogic();
  <span class="hljs-keyword">return</span> bidsCount;
}
</code></pre><p>#5 - Don’t Describe Constants in Comments</p>
<p>Instead of describing a constant in comments, give a proper name to it.</p>
<p>Bad Example:</p>
<pre><code><span class="hljs-keyword">if</span> (request.processingCount <span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-number">5</span>) { <span class="hljs-comment">// 5 is the max retry count</span>
  doStuff();
}
</code></pre><p>Good Example:</p>
<pre><code>const MAX_RETRY_COUNT <span class="hljs-operator">=</span> <span class="hljs-number">5</span>;
<span class="hljs-keyword">if</span> (request.processingCount <span class="hljs-operator">=</span><span class="hljs-operator">=</span> MAX_RETRY_COUNT) {
  doStuff();
}
</code></pre><p>#6 - Use Prefixes Smartly</p>
<p>Sometimes prefixes are a necessity and not an option.</p>
<p>For example, the address has many components:</p>
<ul>
<li>firstName</li>
<li>lastName</li>
<li>Street</li>
<li>City </li>
<li>Country</li>
</ul>
<p>Now, if you use only <code>“State”</code> variable in a method, some other person reading the code might not get the idea that it was a part of an address. You can remedy the situation by adding the <code>“address”</code> prefix to these names, which will always convey the right idea. </p>
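A minimal illustration of the point above; the variable names and values are hypothetical.

```typescript
// Without the prefix, a bare `state` read in isolation is ambiguous
// (UI state? a machine state? a region?). The "address" prefix carries
// the context with the name. All values here are illustrative.
const addressStreet = "MG Road";
const addressCity = "Bengaluru";
const addressState = "Karnataka";
const addressCountry = "India";

// Any consumer of these names immediately knows they belong to an address.
const displayAddress = [addressStreet, addressCity, addressState, addressCountry].join(", ");
```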
<h3 id="heading-functions">Functions</h3>
<p>#1 - Limit the Arguments</p>
<p>Try to use the minimum number of arguments. In our codebase, we use an options object as one argument and always try to pass all optional arguments through it. </p>
<p>Bad Example:</p>
<pre><code>async createBookingRequest(carrierCode?: <span class="hljs-keyword">string</span>, ... , options: OptionsType<span class="hljs-operator">&lt;</span>{}<span class="hljs-operator">&gt;</span>) {
  <span class="hljs-keyword">if</span> (carrierCode) {
    <span class="hljs-comment">//functionLogic()</span>
  }
  <span class="hljs-comment">//functionLogic()</span>
  <span class="hljs-keyword">return</span>;
}
</code></pre><p>Good Example:</p>
<p>In the above example, carrierCode is optional, so we can pass it in options instead. This decreases the number of arguments without changing anything else in the code.</p>
<pre><code>async createBookingRequest(... , options: OptionsType<span class="hljs-operator">&lt;</span>{carrierCode?: <span class="hljs-keyword">string</span>}<span class="hljs-operator">&gt;</span>) {
  const { carrierCode } <span class="hljs-operator">=</span> options;
  <span class="hljs-keyword">if</span> (carrierCode) {
    <span class="hljs-comment">//functionLogic()</span>
  }
  <span class="hljs-comment">//functionLogic()</span>
  <span class="hljs-keyword">return</span>;
}
</code></pre><p>#2 - Naming Conventions</p>
<p>For clean code, we adhere to the same naming conventions for functions that we follow for variables. </p>
<p>#3 - Structure of A Function</p>
<p>Keep the functions small and stick to the “one-task-per-function” dictum to ensure the conciseness of functions.</p>
<p>Bad Example:</p>
<pre><code><span class="hljs-function"><span class="hljs-keyword">async</span> <span class="hljs-title">createShippingInstruction</span>(<span class="hljs-params">isDraft: boolean, ...</span>)</span> {
  <span class="hljs-keyword">if</span>(isDraft) {
    <span class="hljs-comment">// validations for draft request code</span>
    <span class="hljs-comment">// functionLogic()</span>
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-comment">// validations for non draft request code</span>
    <span class="hljs-comment">// functionLogic()</span>
  }
  <span class="hljs-keyword">return</span>;
}
</code></pre><p>Good Example:</p>
<pre><code><span class="hljs-function"><span class="hljs-keyword">async</span> <span class="hljs-title">createShippingInstruction</span>(<span class="hljs-params">isDraft: boolean, ...</span>)</span> {
  <span class="hljs-keyword">if</span>(isDraft) {
    <span class="hljs-keyword">await</span> validateDraftShippingInstructionRequest();
    <span class="hljs-comment">// functionLogic()</span>
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-keyword">await</span> validateNonDraftShippingInstructionRequest();
    <span class="hljs-comment">// functionLogic()</span>
  }
  <span class="hljs-keyword">return</span>;
}
</code></pre><p>#4 - Avoid using lots of conditionals</p>
<p>Bad Example:</p>
<pre><code>const myFunc <span class="hljs-operator">=</span> (dep: <span class="hljs-keyword">string</span>) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
  const myVar <span class="hljs-operator">=</span> (() <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    switch(dep) {
      case <span class="hljs-string">'a'</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'aahoo'</span>;
      case <span class="hljs-string">'b'</span>:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'deadman'</span>;
      default:
        <span class="hljs-keyword">return</span> <span class="hljs-string">'spooky'</span>;
    }
  })();
  <span class="hljs-keyword">return</span> myVar;
};

const myFunc <span class="hljs-operator">=</span> (dep: <span class="hljs-keyword">string</span>) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
  let myVar <span class="hljs-operator">=</span> <span class="hljs-string">'spooky'</span>;
  <span class="hljs-keyword">if</span> (dep <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">'a'</span>) {
    myVar <span class="hljs-operator">=</span> <span class="hljs-string">'aahoo'</span>;
  }
  <span class="hljs-keyword">if</span> (dep <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">'b'</span>) {
    myVar <span class="hljs-operator">=</span> <span class="hljs-string">'deadman'</span>;
  }
  <span class="hljs-keyword">return</span> myVar
};

const requestsHandler <span class="hljs-operator">=</span> (requestType: <span class="hljs-keyword">string</span>) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
  <span class="hljs-keyword">if</span> (requestType <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">'booking'</span>){
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.getBookingObject();
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (requestType <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">'si'</span>){
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.getSIObject();
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (requestType <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">'vgm'</span>){
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.getVGMObject();
  }
};
</code></pre><p>Good Example:</p>
<pre><code>const myFunc <span class="hljs-operator">=</span> (dep: <span class="hljs-keyword">string</span>) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
  const map <span class="hljs-operator">=</span> {
    a: <span class="hljs-string">'aahoo'</span>,
    b: <span class="hljs-string">'deadman'</span>,
    <span class="hljs-comment">// ... goes on ...</span>
  };
  <span class="hljs-keyword">return</span> map[dep] ?? <span class="hljs-string">'spooky'</span>;
};

const requestsHandler <span class="hljs-operator">=</span> (requestType: <span class="hljs-keyword">string</span>) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
  const map <span class="hljs-operator">=</span> {
    booking: <span class="hljs-built_in">this</span>.getBookingObject,
    si: <span class="hljs-built_in">this</span>.getSIObject,
    vgm: <span class="hljs-built_in">this</span>.getVGMObject,
  }
  <span class="hljs-keyword">return</span> map[requestType]();
};
</code></pre><p>#5 - Favor Functional Programming</p>
<p>Favor functional programming over imperative programming for making code testing easier.</p>
<p>Bad Example:</p>
<pre><code>const programmerOutput <span class="hljs-operator">=</span> [
  {
    name: <span class="hljs-string">"Uncle Bobby"</span>,
    linesOfCode: <span class="hljs-number">500</span>
  },
  {
    name: <span class="hljs-string">"Suzie Q"</span>,
    linesOfCode: <span class="hljs-number">1500</span>
  },
  {
    name: <span class="hljs-string">"Jimmy Gosling"</span>,
    linesOfCode: <span class="hljs-number">150</span>
  },
  {
    name: <span class="hljs-string">"Gracie Hopper"</span>,
    linesOfCode: <span class="hljs-number">1000</span>
  }
];

let totalOutput <span class="hljs-operator">=</span> <span class="hljs-number">0</span>;

<span class="hljs-keyword">for</span> (let i <span class="hljs-operator">=</span> <span class="hljs-number">0</span>; i <span class="hljs-operator">&lt;</span> programmerOutput.<span class="hljs-built_in">length</span>; i<span class="hljs-operator">+</span><span class="hljs-operator">+</span>) {
  totalOutput <span class="hljs-operator">+</span><span class="hljs-operator">=</span> programmerOutput[i].linesOfCode;
}
</code></pre><p>Good Example:</p>
<pre><code>const programmerOutput <span class="hljs-operator">=</span> [
  {
    name: <span class="hljs-string">"Uncle Bobby"</span>,
    linesOfCode: <span class="hljs-number">500</span>
  },
  {
    name: <span class="hljs-string">"Suzie Q"</span>,
    linesOfCode: <span class="hljs-number">1500</span>
  },
  {
    name: <span class="hljs-string">"Jimmy Gosling"</span>,
    linesOfCode: <span class="hljs-number">150</span>
  },
  {
    name: <span class="hljs-string">"Gracie Hopper"</span>,
    linesOfCode: <span class="hljs-number">1000</span>
  }
];

const totalOutput <span class="hljs-operator">=</span> programmerOutput.reduce(
  (totalLines, output) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> totalLines <span class="hljs-operator">+</span> output.linesOfCode, <span class="hljs-number">0</span>);
</code></pre><p>#6 - Avoid Side-Effects</p>
<p>Ensure that a function has no side effects, such as writing to a file, modifying global variables, or sharing state between objects without structure.</p>
<p>Otherwise, in the future, if someone needs to update or edit the code, such side effects can lead to serious outcomes. </p>
<p>Bad Example:</p>
<pre><code><span class="hljs-comment">// Global variable referenced by following function.</span>
<span class="hljs-comment">// If we had another function that used this name, now it'd be an array and it could break it.</span>
let name <span class="hljs-operator">=</span> <span class="hljs-string">"Ryan McDermott"</span>;

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">splitIntoFirstAndLastName</span>(<span class="hljs-params"></span>) </span>{
  name <span class="hljs-operator">=</span> name.split(<span class="hljs-string">" "</span>);
}

splitIntoFirstAndLastName();

console.log(name); <span class="hljs-comment">// ['Ryan', 'McDermott'];</span>
</code></pre><p>Good Example:</p>
<pre><code><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">splitIntoFirstAndLastName</span>(<span class="hljs-params">name</span>) </span>{
  <span class="hljs-keyword">return</span> name.split(<span class="hljs-string">" "</span>);
}

const name <span class="hljs-operator">=</span> <span class="hljs-string">"Ryan McDermott"</span>;
const newName <span class="hljs-operator">=</span> splitIntoFirstAndLastName(name);

console.log(name); <span class="hljs-comment">// 'Ryan McDermott';</span>
console.log(newName); <span class="hljs-comment">// ['Ryan', 'McDermott'];</span>
</code></pre><p>#7 - Miscellaneous Pointers:</p>
<ul>
<li>As we cannot always avoid “switch” statements, try to keep them in a low-level class and never repeat them</li>
<li>Use descriptive names to convey what the function does</li>
<li>Remove all instances of duplicate code</li>
</ul>
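<p>The first pointer above can be sketched as follows; this is a hedged illustration with made-up class names, not Shipsy code. The only <code>switch</code> on the employee type lives in one low-level factory, and every caller works with the returned interface:</p>

```typescript
// Illustrative only: isolate the type switch in a single factory function.
interface Employee {
  calculatePay(): number;
}

class HourlyEmployee implements Employee {
  calculatePay(): number { return 100; }
}

class SalariedEmployee implements Employee {
  calculatePay(): number { return 200; }
}

// The one and only switch on employee type in the codebase lives here.
function makeEmployee(type: "hourly" | "salaried"): Employee {
  switch (type) {
    case "hourly": return new HourlyEmployee();
    case "salaried": return new SalariedEmployee();
  }
}
```

<p>Callers never repeat the switch; they simply call <code>makeEmployee(...).calculatePay()</code>.</p>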
<h3 id="heading-formatting">Formatting</h3>
<p>#1 - Variable Declaration</p>
<p>Declare variables as close to their usage as possible. If a variable is used in a loop or a conditional statement, declare it either inside that scope or just above that part of the code. </p>
<p>Bad Example:</p>
<pre><code>async getActiveBookingCount() {
  let activeBookingCount <span class="hljs-operator">=</span> <span class="hljs-number">0</span>;
  <span class="hljs-comment">// functionLogic()</span>
  const bookings <span class="hljs-operator">=</span> getBookings();
  bookings.forEach((booking) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    <span class="hljs-keyword">if</span>(booking.is_active) {
      activeBookingCount<span class="hljs-operator">+</span><span class="hljs-operator">+</span>;
    }
  });

  <span class="hljs-keyword">return</span> activeBookingCount;
}
</code></pre><p>Good Example:</p>
<pre><code>async getActiveBookingCount() {
  <span class="hljs-comment">// functionLogic()</span>
  const bookings <span class="hljs-operator">=</span> getBookings();
  let activeBookingCount <span class="hljs-operator">=</span> <span class="hljs-number">0</span>;
  bookings.forEach((booking) <span class="hljs-operator">=</span><span class="hljs-operator">&gt;</span> {
    <span class="hljs-keyword">if</span>(booking.is_active) {
      activeBookingCount<span class="hljs-operator">+</span><span class="hljs-operator">+</span>;
    }
  });

  <span class="hljs-keyword">return</span> activeBookingCount;
}
</code></pre><p>#2 - Keep Calling and Callee Functions Close</p>
<p>If a function calls another function, keep the two vertically close in the source file. Ideally, keep the caller right above the callee wherever possible. We tend to read code from top to bottom, like a newspaper, so make your code read that way. </p>
<p>Bad Example:</p>
<pre><code>async getBidDetails(...) {
}

async getBidDetailsForFF() {
  <span class="hljs-keyword">return</span> await getBidDetails( ..., {...options, bidsView: <span class="hljs-string">'FF'</span>});
}

async getBidDetailsForShipper() {
  <span class="hljs-keyword">return</span> await getBidDetails( ..., {...options, bidsView: <span class="hljs-string">'SHIPPER'</span>});
}

OR

async getBidDetailsForShipper() {
  <span class="hljs-keyword">return</span> await getBidDetails( ..., {...options, bidsView: <span class="hljs-string">'SHIPPER'</span>});
}

async getBidDetails(...) {
}

async getBidDetailsForFF() {
  <span class="hljs-keyword">return</span> await getBidDetails( ..., {...options, bidsView: <span class="hljs-string">'FF'</span>});
}
</code></pre><p>Good Example:</p>
<pre><code>async getBidDetailsForShipper() {
  <span class="hljs-keyword">return</span> await getBidDetails( ..., {...options, bidsView: <span class="hljs-string">'SHIPPER'</span>});
}

async getBidDetailsForFF() {
  <span class="hljs-keyword">return</span> await getBidDetails( ..., {...options, bidsView: <span class="hljs-string">'FF'</span>});
}

async getBidDetails(...) {
}
</code></pre><h3 id="heading-comments">Comments</h3>
<p>Writing a description or summary of the functions is a good thing. But writing comments as make-up for bad code is not. </p>
<p>For example, if a function or variable name is poor, writing a comment to compensate does not fix the problem. Instead, always use a name that is self-descriptive. </p>
<p>Bad Example:</p>
<pre><code><span class="hljs-comment">// Check to see if the employee is eligible or not for full benefits</span>
<span class="hljs-keyword">if</span> ((employee.flags <span class="hljs-operator">&amp;</span> HOURLY_FLAG) <span class="hljs-operator">&amp;</span><span class="hljs-operator">&amp;</span> (employee.age <span class="hljs-operator">&gt;</span> <span class="hljs-number">65</span>)) {

}
</code></pre><p>Good Example:</p>
<pre><code><span class="hljs-keyword">if</span> (employee.isEligibleForFullBenefits()) {
}
</code></pre><p>Below, we share some pointers to keep in mind for good and bad commenting practices.</p>
<p><strong>Good or necessary comments:</strong></p>
<ul>
<li>Legal comments</li>
<li>Explanation of intent: comments that capture the reasoning behind an implementation decision.</li>
<li>Clarification: comments that explain an opaque return value or a third-party construct, such as naming EDIFACT segments.</li>
<li>TODO comments</li>
</ul>
<p><strong>Bad Comments:</strong></p>
<ul>
<li>Redundant comments or noise comments, such as defining the function or variable even when you can understand them easily by their names.</li>
<li>Mandated comments, such as the boilerplate parameter-definition comments that we used in our codebase.</li>
<li>Commented out code always creates confusion and degrades the code readability.</li>
</ul>
<h3 id="heading-bonus-tip-error-handling">Bonus Tip - Error Handling</h3>
<p>Thrown errors are a good thing! </p>
<p>They mean the runtime has successfully identified when something in your program has gone wrong and it's letting you know by stopping function execution on the current stack, killing the process (in Node), and notifying you in the console with a stack trace.</p>
<p>Hence, we never ignore caught errors.</p>
<p>Doing nothing with a caught error means you can never react to it or fix it. Logging the error to the console <code>(console.log)</code> isn't much better, as it often gets lost in a sea of things printed to the console. </p>
<p>If you wrap any bit of code in a <code>try/catch</code> it means you think an error may occur there and therefore you should have a plan, or create a code path, for when it occurs. </p>
<p>Bad Example:</p>
<pre><code><span class="hljs-keyword">try</span> {
  functionThatMightThrow();
} <span class="hljs-keyword">catch</span> (error) {
  console.log(error);
}
</code></pre><p>Good Example:</p>
<pre><code><span class="hljs-keyword">try</span> {
  functionThatMightThrow();
} <span class="hljs-keyword">catch</span> (error) {
  <span class="hljs-comment">// One option (more noisy than console.log):</span>
  console.error(error);
  <span class="hljs-comment">// Another option:</span>
  notifyUserOfError(error);
  <span class="hljs-comment">// Another option:</span>
  reportErrorToService(error);
  <span class="hljs-comment">// OR do all three!</span>
}
</code></pre><h2 id="heading-establishing-a-clean-coding-culture-every-step-is-crucial">Establishing a Clean Coding Culture: Every Step Is Crucial</h2>
<p>While it is easy to elaborate on the clean coding practices on paper, implementing them at an organizational level is a game of patience and time.</p>
<p>At Shipsy, we facilitate a clean coding culture by making it an essential part of our engineering onboarding process.</p>
<p>We also follow the Engineering Guide that we have built from scratch. We keep on updating it with every project to keep every developer on the same page.</p>
<p>Finally, we ensure that our entire community stays abreast of all the latest developments and clean coding practices by organizing regular tech sessions where we share, learn, and innovate in a sustainable manner.</p>
<p>Such organizational measures and initiatives leave no room for discrepancies or a lopsided codebase. </p>
<p>Fostering a clean coding culture at the organization level is an ongoing process and we hope that our inputs and endeavors energize similar efforts across the global Dev community.  </p>
]]></content:encoded></item><item><title><![CDATA[Creating Custom Shopify Libraries: Challenges and Quick Hacks]]></title><description><![CDATA[Code is an amazing man-made wonder, and we, at Shipsy, are in absolute awe of it!
So, to all our coding fellows - “In Code, we Trust!” 

Source
We want to share with you some of the work that we are doing; essentially, how we catered to our multiple ...]]></description><link>https://engineering.shipsy.io/creating-custom-shopify-libraries</link><guid isPermaLink="true">https://engineering.shipsy.io/creating-custom-shopify-libraries</guid><category><![CDATA[shopify]]></category><category><![CDATA[authentication]]></category><category><![CDATA[authorization]]></category><category><![CDATA[JWT]]></category><category><![CDATA[APIs]]></category><dc:creator><![CDATA[Arya Bharti]]></dc:creator><pubDate>Mon, 28 Feb 2022 16:51:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1646893380807/Y7ddoPugp.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Code is an amazing man-made wonder, and we, at Shipsy, are in absolute awe of it!</p>
<p>So, to all our coding fellows - “In Code, we Trust!” </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645525151442/sxe05NTTg.png" alt="Code Image.png" />
<a target="_blank" href="https://cdn-images-1.medium.com/max/1600/1*dUQuNsBDaBT1SYgpXf6M6Q.png">Source</a></p>
<p>We want to share with you some of the work that we are doing; essentially, how we catered to our multiple Shopify app clients via a single server.</p>
<p>Before we get into the granular details, it is important to understand the driving force behind this. </p>
<p>There are three crucial components of Shopify SDK:</p>
<ul>
<li>Shopify App Bridge</li>
<li>Shopify Auth</li>
<li>Shopify API</li>
</ul>
<p>The Shopify App Bridge connects the client-side with the Shopify server. The Shopify Auth and Shopify API libraries perform the underlying and facilitating tasks, such as communications, authentication, and validation.</p>
<p>Now, both these libraries are stateful. Once they hold a “state” variable, a context object in this case (we will revisit this in later sections), they don’t allow any change to it. </p>
<p>So, once the context object has a client ID, the Shopify libraries will not allow you to change it, and they thereby support only a “one-server-one-app” architecture. </p>
<p>However, we wanted to change that!</p>
<h2 id="heading-problem-statement">Problem Statement</h2>
<p>Achieving a “create once, use infinitely” capability on our Shopify app to ensure that our entire customer base from different domains is able to use the same product.</p>
<p>Essentially, we wanted to make the Shopify libraries “stateless” and allow us to dynamically change the context object as per the incoming connection requests.</p>
<p>Now, this was something like getting your own highway for your specific traveling needs!</p>
<p>Yes, it was nothing short of a Utopia, but Hey! We at Shipsy realized our Utopia!</p>
<p>Presented here, is a walk-through of our entire endeavor and a little glimpse of what it took to get our own highway!</p>
<h2 id="heading-challenges-to-our-approach">Challenges to Our Approach</h2>
<p>Shopify’s API libraries are stateful, which means that all the APIs share a global context object. Unless you modify the core libraries, you cannot change this context object at will.</p>
<p>Before we move ahead, let us have a brief discussion about this context object.</p>
<p>The context object holds the Shopify API key and API secret for each app. Both play an essential role in authentication and verification via session tokens and JSON Web Tokens (JWTs).</p>
<p>So, the Shopify app libraries have only one context object and in that context object, there is only one combination of API Key and API Secret.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645526137079/Ma_Zsvl7u.png" alt="carbon.png" />
As this context object is being used everywhere - for sending app requests to the Shopify server and getting responses back from the same - we were not able to use the single server for serving app requests from multiple apps.</p>
<p><em>So, what we did can also help the developer community across the globe. The developers can learn from this approach and leverage the Shopify server to serve multiple apps from a single server.</em></p>
<h2 id="heading-overcoming-context-object-challenges-with-custom-libraries">Overcoming Context Object Challenges With Custom Libraries</h2>
<p>We custom-coded the Shopify API and Shopify Auth libraries.</p>
<p>While the Shopify Auth library for our apps is entirely built from scratch, line-by-line, Shopify API is coded as per the business’ unique needs.</p>
<p>By doing this, we converted Shopify libraries from stateful to stateless libraries!</p>
<p>Hence, whenever a request comes from the Shopify dashboard, or from the Shopify App Client to the server, instead of using Shopify libraries, we use our custom-coded Shopify Auth library.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645525996369/TGkl6YT2M.png" alt="Screenshot 2022-02-17 at 11.34.20 AM.png" />
Next comes a step that is one of the many fantastic things code can help us developers do!</p>
<p>We have a custom logic that identifies which app is requesting this service.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645526191937/qAULXs24n.png" alt="carbon (4).png" />
Once we know which app is requesting the connection, we can pass the context object in every function with request-based parameters, and it is no longer global.</p>
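<p>A minimal TypeScript sketch of the idea (all names and values here are hypothetical, simplified from the approach described above): the context is looked up per request and passed as an explicit parameter, so no global state remains:</p>

```typescript
// Hypothetical per-app credentials, keyed by the requesting app's host.
interface ShopifyContext {
  apiKey: string;
  apiSecret: string;
}

const appConfigs: Record<string, ShopifyContext> = {
  "client-a.example.com": { apiKey: "key-a", apiSecret: "secret-a" },
  "client-b.example.com": { apiKey: "key-b", apiSecret: "secret-b" },
};

// Pure lookup: the returned context travels with the request instead of
// living in a shared global object.
function contextForHost(host: string): ShopifyContext {
  const ctx = appConfigs[host];
  if (!ctx) throw new Error(`Unknown app host: ${host}`);
  return ctx;
}
```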
<p>Next, a pure function (which is also custom coded), takes the data from this context object parameters and sends it to the Shopify server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645526239258/ahRRk0wiO.png" alt="carbon (2).png" />
While a major hurdle was overcome, there was still one consequential challenge left.</p>
<h2 id="heading-how-to-know-which-app-to-serve">How to Know Which App to Serve?</h2>
<p>Whenever we create an app, we set a custom subdomain on the basis of its domain and we tell the app the correct server API so that it hits the right server.</p>
<p>The config file stores the client details, on the basis of which, we figure out which app needs to be served.</p>
<p>Now, we configured our DNS such that all the requests coming on this domain are sent to the Shipsy server.</p>
<p>All the incoming app requests are encountered by the frontend and we identify which app needs server access. Once this is done, all the other communications are then internally handled by custom Shopify Auth and Shopify API.</p>
<p>This Shopify API is handling all the conversations we are having with Shopify, such as any data we require from Shopify, etc. Our custom Shopify Auth handles all the logic on the server-side of the Shopify App.</p>
<p>Now we will talk about the client-side architecture of our Shopify App.</p>
<h2 id="heading-overcoming-the-client-side-challenges-in-custom-library-riddle">Overcoming the Client-Side Challenges in Custom Library Riddle</h2>
<p>For all the pages in the frontend, we call the APIs indirectly: our app is loaded inside Shopify, and API calls such as soft data, label generation, and virtual series generation are made from there.</p>
<p>This is because, earlier, request authentication and verification were managed entirely by Shipsy.</p>
<p>In Shopify's case, however, authentication involves Shopify’s frontend as well as our app’s backend and frontend.</p>
<p>We know that you might need to read that sentence again. </p>
<p><strong>So, here is a little something to keep you motivated to do so:</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645525593982/aAoTf7KkaK.jpeg" alt="appImage.jpeg" />
<a target="_blank" href="https://i.pinimg.com/736x/d5/be/57/d5be57f2dd5064d460c0fa48249c3263.jpg">Source</a></p>
<p>Let us try to understand this with the help of an example.</p>
<h2 id="heading-how-we-authenticate-our-frontend-with-our-backend">How We Authenticate Our Frontend With Our Backend?</h2>
<p>Suppose we load a page or data on a page.</p>
<p>Now, before that request is processed, sync is loaded/ordered, and the frontend sends the request to Shopify.</p>
<p>In our backend, our middleware processes the session token, which carries many details, such as the expiry time and the shop name; the shop name is a unique identifier in our case.</p>
<p>As soon as the app frontend is loaded, initially, a skeleton is loaded. This skeleton loads the Shopify App Bridge client on the browser of the app user. </p>
<p>Now, every time the user sends a request, we include a session token in its headers, which we get from the Shopify App Bridge. This session token is verified on our Shopify app backend (server) to confirm whether the request is coming from a genuine source or not. </p>
<p>Once we know that the Shopify request is true and valid, the request gets processed normally.  </p>
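<p>Shopify session tokens are JWTs signed with the app's API secret, so the backend check boils down to a standard signature verification. The sketch below is a simplified illustration using Node's <code>crypto</code> module (it checks only the HS256 signature; real code should also check the token's expiry and use a constant-time comparison):</p>

```typescript
import * as crypto from "crypto";

// Simplified HS256 check for a JWT-style session token ("header.payload.signature").
function isValidSessionToken(token: string, apiSecret: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  const [header, payload, signature] = parts;
  // Recompute the signature over "header.payload" with the app's API secret.
  const expected = crypto
    .createHmac("sha256", apiSecret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  // Note: production code should compare with crypto.timingSafeEqual.
  return signature === expected;
}
```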
<p>Next, we discuss how this process generated desirable results for the entire Shipsy ecosystem.</p>
<h2 id="heading-results-of-custom-coding-the-shopify-libraries">Results of Custom Coding the Shopify Libraries</h2>
<h3 id="heading-no-automatic-logouts">No Automatic Logouts</h3>
<p>Earlier, when we used to manage authentication and verification at our end, we used session cookies for the process.</p>
<p>Once these session cookies expired, the clients were automatically logged out, which degraded the user experience.</p>
<p>Now, the authentication and verification are handled by our custom Shopify Auth library and the logouts don’t happen automatically.</p>
<h3 id="heading-single-server-for-multiple-clients">Single Server for Multiple Clients</h3>
<p>By creating custom libraries for our Shopify app, we are now serving multiple clients via a single server, which obviously comes with a lot of operational and management advantages.</p>
<h3 id="heading-multilingual-shopify-stores">Multilingual Shopify Stores</h3>
<p>Being able to regulate the context object, we are now enabling organizations to use the same admin account for managing multiple stores that use multiple languages.</p>
<h2 id="heading-looking-ahead">Looking Ahead</h2>
<p>Every new undertaking in coding comes with a slew of learnings, and this one brought many to our team. We plan to leverage them across multiple scenarios and encourage our fellow developers to follow suit.</p>
<p>Experiential learning impacts in the Dev community can spur a huge wave of transformation that benefits masses at a large scale.</p>
<p>And, at Shipsy, we believe in driving such changes and welcome the change drivers with an open embrace and an excellent Dev culture. </p>
<p>Wish to experience the wave of change? Get onboard with Shipsy and get started with your journey with our Dev Team.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring Web and PhonePe Switch with Flutter]]></title><description><![CDATA[By  Kushal Jain  and  Daksh Pokar 
Mobile technology evolved by leaps and bounds in the last decade, and along with it came skyrocketing demand for building apps for archaic company websites. Soon businesses realized the true potential of cloud-based...]]></description><link>https://engineering.shipsy.io/exploring-web-and-phonepe-switch-with-flutter</link><guid isPermaLink="true">https://engineering.shipsy.io/exploring-web-and-phonepe-switch-with-flutter</guid><category><![CDATA[Flutter]]></category><category><![CDATA[PWA]]></category><category><![CDATA[sdk]]></category><dc:creator><![CDATA[Shipsy Engineering]]></dc:creator><pubDate>Tue, 14 Sep 2021 11:09:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1631616992122/xBIrdbUkV.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>By  <a target="_blank" href="https://www.linkedin.com/in/kushaljain89/?originalSubdomain=in">Kushal Jain</a>  and  <a target="_blank" href="https://www.linkedin.com/in/dakshpokar/?originalSubdomain=in">Daksh Pokar</a> </p>
<p>Mobile technology evolved by leaps and bounds in the last decade, and along with it came skyrocketing demand for building apps for archaic company websites. Soon businesses realized the true potential of cloud-based apps with regards to scalability and supporting growing traffic volumes of their online business. Result? Millions of apps on the Play Store, App Store, and more.</p>
<p>However, apps have one major disadvantage in contrast to traditional websites: ease of access. Any web browser can access a website through its URL. But to use an app, one needs access to an application store, and with that comes the need for an account: an Apple ID for the App Store or a Google Account for the Play Store. </p>
<p>So, is a plain mobile website the answer, then? Not quite. A mobile website tends to be slow, and it is not adaptive to ever-evolving mobile screens. According to  <a target="_blank" href="https://blog.kissmetrics.com/wp-content/uploads/2011/04/loading-time.pdf">KissMetrics</a>, out of 100 people visiting a website, 40 abandon it just because it takes too long to load. Add to that the revenue lost to downtime: research  <a target="_blank" href="https://www.alertra.com/blog/how-much-business-will-your-company-lose-during-website-outage">highlights</a>  that just 15 minutes of website downtime can cost businesses $12,495 in lost profits. No wonder the man below is so angry!</p>
<p><img src="https://c.tenor.com/JIS_KDKKsgYAAAAd/guaton-computadora.gif" alt="Gif description" /></p>
<p>These usual problems with traditional websites and accessibility constraints of apps were enough to fuel the next major evolution in the app industry. As a result, the world was introduced to PWAs (Progressive Web Apps).</p>
<h2 id="what-are-pwas">What are PWAs?</h2>
<p>PWAs are simple web apps built using common web technologies like HTML, CSS, and JavaScript. The idea behind PWAs is not new: Steve Jobs first presented the concept to the world during the iPhone introduction in 2007. However, the term “PWA” is recent and was coined by Google Chrome developer Alex Russell and designer Frances Berriman in an  <a target="_blank" href="https://www.google.com/url?q=https://medium.com/@slightlylate/progressive-apps-escaping-tabs-without-losing-our-soul-3b93a8561955&amp;sa=D&amp;source=editors&amp;ust=1631258159992000&amp;usg=AOvVaw1dvKFrn1nReVTMDlhPQCz0">article</a> in 2015. </p>
<p>PWAs are faster, lightweight, easily extensible, and even work offline. Their big advantage is that they can be found on the web and work on any mobile device with a web browser. </p>
<p>Also Read:  <a target="_blank" href="https://www.thinkwithgoogle.com/marketing-strategies/app-and-mobile/progressive-web-apps-benefit-brands/">A Progressive Web App Might Be Right for Your Brand</a>  </p>
<p>In India, where mobile data speeds are still limited, it made sense for companies to adopt PWAs. Considering the potential opportunity in delivering PWAs, PhonePe launched its microapps platform called PhonePe Switch.</p>
<h2 id="what-is-phonepe-switch">What is PhonePe Switch?</h2>
<p>PhonePe Switch is a platform with over 300 million active users that hosts a vast collection of microapps. It reduces customer acquisition costs for any company by giving users one-click access to hundreds of apps with seamless integration into PhonePe's secure payments channel. An existing mobile site or PWA can easily be hosted on the platform by following the guidelines in this  <a target="_blank" href="https://developer.phonepe.com/v4/docs/introduction-to-switch">documentation</a>. </p>
<p>Looking at this opportunity we decided to host our Flutter App on the platform, considering that Flutter now supports building PWAs with the introduction of  <a target="_blank" href="https://flutter.dev/web">Flutter Web</a>.</p>
<p>We ran our Flutter app on Chrome and came across the following challenges:</p>
<ol>
<li>Routes are hashed by default,
e.g. localhost:54724/#/route. Hashed routes make it hard to read query parameters, which we were planning to use</li>
<li>Some packages like geocoder and place picker didn’t support the web platform</li>
<li>Image uploading and some other I/O operations have different implementations on web and mobile,
so we had to separate the image-upload code for the two platforms</li>
</ol>
<h4 id="solution">Solution:</h4>
<p>The above challenges can be solved if we make a web-specific service and encapsulate all the web-specific implementations there (same for mobile):</p>
<p>Define an interface for IOService containing all the methods to be implemented</p>
<pre><code><span class="hljs-keyword">abstract</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">IOService</span> </span>{}
</code></pre><p>Implement IOServiceMobile and IOServiceWeb from the interface and define platform-specific code</p>
<pre><code><span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">IOServiceMobile</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">IOService</span> </span>{}
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">IOServiceWeb</span> <span class="hljs-keyword">implements</span> <span class="hljs-title">IOService</span> </span>{}
</code></pre><p>Conditionally import the service</p>
<pre><code><span class="hljs-keyword">import</span> <span class="hljs-string">'web_service_mobile.dart'</span> <span class="hljs-keyword">if</span> (dart.<span class="hljs-keyword">library</span>.js) <span class="hljs-string">'web_service_web.dart'</span>;
</code></pre><p>In the web service constructor, we can switch to the path URL strategy to solve the hashed-routes issue (challenge 1). For logic with different implementations on web and mobile (challenges 2 and 3), we implement package-specific methods separately for each platform, so the conditional import executes the method from the platform-specific service.</p>
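<p>To make challenge 1 concrete, here is a small illustration (in Python, with made-up URLs) of why a hash-based route hides query parameters from ordinary URL parsing, while a path-based route exposes them:</p>

```python
from urllib.parse import urlparse, parse_qs

# Hash URL strategy (Flutter web's default): the route and its parameters
# live in the URL fragment, so query-string parsing sees nothing.
hashed = urlparse("http://localhost:54724/#/orders?id=42")
print(parse_qs(hashed.query))   # {} -- the params are hidden in the fragment

# Path URL strategy: the same route is a real path plus a query string.
pathed = urlparse("http://localhost:54724/orders?id=42")
print(parse_qs(pathed.query))   # {'id': ['42']}
```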
<p>Now our PWA is running, so we deployed it on an AWS EC2 instance. The next step is to make it work inside PhonePe Switch.</p>
<p>According to  <a target="_blank" href="https://developer.phonepe.com/v4/docs">PhonePe switch</a>  docs, we have the following requirements:</p>
<ol>
<li>Deployed PWA</li>
<li>Pre Prod  <a target="_blank" href="https://www.google.com/url?q=https://docs.phonepe.com/public/GWP0KnUB-p02LmDCBYJw&amp;sa=D&amp;source=editors&amp;ust=1631258277803000&amp;usg=AOvVaw2GpHG1xEU1Nc63gE2hj-h9">APK</a>  for PhonePe</li>
<li>Unique ID from PhonePe</li>
</ol>
<p>Also, if you want your app to work on the Switch platform, you need:</p>
<h3 id="1-sso-login-feature">1. SSO login feature</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1631254034493/QzDHpn6Y_.png" alt="image3.png" /></p>
<p>On the login screen, you should get a prompt for logging in via PhonePe SSO. If the user skips it, normal login can be done.</p>
<h3 id="2-payment-using-phonepe">2. Payment using PhonePe</h3>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1631254073629/iLjHb7nJy.png" alt="image2.png" /></p>
<p>When a user makes a payment, the PhonePe payment screen has to be shown directly.</p>
<p>To integrate PhonePe Switch, we implemented a PhonePeService for both mobile and web, just like IOService, and wrapped the PhonePe JS SDK in our Dart code. This also took some time but was achievable. We managed to load the SDK using JS wrappers in Dart, Futures, and Completers.</p>
<pre><code>  Future&lt;<span class="hljs-keyword">void</span>&gt; loadSdk() {
   <span class="hljs-keyword">final</span> Completer _completer = <span class="hljs-keyword">new</span> Completer();
   <span class="hljs-keyword">if</span> (isPhonePeSwitchPlatform()) {
     <span class="hljs-keyword">var</span> head = <span class="hljs-built_in">document</span>.getElementsByTagName(<span class="hljs-string">'head'</span>)[<span class="hljs-number">0</span>];
     <span class="hljs-keyword">var</span> jsScript = <span class="hljs-built_in">document</span>.createElement(<span class="hljs-string">"script"</span>);

     jsScript.attributes.addAll({
       <span class="hljs-string">'type'</span>: <span class="hljs-string">'text/javascript'</span>,
       <span class="hljs-string">'src'</span>: <span class="hljs-string">'scripts/phonepe-sdk.js'</span>,
     });

     jsScript.addEventListener(<span class="hljs-string">'load'</span>, (event) {
       <span class="hljs-built_in">window</span>.console.log(<span class="hljs-string">'[PHONEPE] Script Loaded'</span>);
       _initializeSdk();
       _completer.complete();
     });

     head.append(jsScript);
   } <span class="hljs-keyword">else</span> {
     <span class="hljs-comment">// not on PhonePe Switch: complete immediately so callers don't hang</span>
     _completer.complete();
   }
   <span class="hljs-keyword">return</span> _completer.future;
 }

 Future&lt;<span class="hljs-keyword">void</span>&gt; _initializeSdk() <span class="hljs-keyword">async</span> {
   <span class="hljs-keyword">var</span> phonePeParent = jsUtil.getProperty(<span class="hljs-built_in">window</span>, <span class="hljs-string">'PhonePe'</span>);
   <span class="hljs-keyword">var</span> phonePe = jsUtil.getProperty(phonePeParent, <span class="hljs-string">'PhonePe'</span>);
   <span class="hljs-keyword">var</span> constants = jsUtil.getProperty(phonePeParent, <span class="hljs-string">'Constants'</span>);
   <span class="hljs-keyword">var</span> species = jsUtil.getProperty(constants, <span class="hljs-string">'Species'</span>);
   jsUtil.setProperty(phonePe, <span class="hljs-string">'loggingEnabled'</span>, <span class="hljs-keyword">true</span>);
   <span class="hljs-built_in">window</span>.console.log(<span class="hljs-string">'[PHONEPE] LOGGING ENABLED'</span>);
   _sdk = <span class="hljs-keyword">await</span> jsUtil.promiseToFuture(jsUtil
       .callMethod(phonePe, <span class="hljs-string">'build'</span>, [jsUtil.getProperty(species, <span class="hljs-string">'web'</span>)]));
   <span class="hljs-built_in">window</span>.console.log(<span class="hljs-string">'[PHONEPE] SDK INIT DONE'</span>);
 }
</code></pre><p>Window and document are imported from dart:html and  jsUtil is imported from dart:js_util package as:</p>
<pre><code><span class="hljs-keyword">import</span> <span class="hljs-string">'dart:js_util'</span> <span class="hljs-keyword">as</span> jsUtil;
</code></pre><p>Similarly, we wrapped the remaining SDK methods the same way.</p>
<p>Building a web application with Flutter is not that tedious. Try it out yourself. 
If you love solving complex engineering problems like this one, we would love to know you better. Check out the open job positions  <a target="_blank" href="https://shipsy.io/career/">here</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Scaling Infrastructure as Code - Terraform Learnings]]></title><description><![CDATA[By  Pankaj Dhariwal 
At Shipsy we empower global businesses to optimize, automate, track and simplify end-to-end logistics and supply chain operations using our smart logistics management platforms. To enable this, we are handling complex cloud infra...]]></description><link>https://engineering.shipsy.io/scaling-infrastructure-as-code-terraform-learnings</link><guid isPermaLink="true">https://engineering.shipsy.io/scaling-infrastructure-as-code-terraform-learnings</guid><category><![CDATA[Terraform]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[AWS]]></category><category><![CDATA[coding]]></category><category><![CDATA[infrastructure]]></category><dc:creator><![CDATA[Shipsy Engineering]]></dc:creator><pubDate>Thu, 26 Aug 2021 11:38:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1629786664301/rIsc2K3CP.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>By  <a target="_blank" href="https://www.linkedin.com/in/pankaj-dhariwal">Pankaj Dhariwal</a> </p>
<p>At Shipsy we empower global businesses to optimize, automate, track and simplify end-to-end logistics and supply chain operations using our smart logistics management platforms. To enable this, we are handling complex cloud infrastructure at a massive scale.</p>
<p>We soon realized that it would be challenging for the infrastructure team to create, configure and handle the cloud infrastructure manually. So, the logical conclusion was  <a target="_blank" href="https://en.wikipedia.org/wiki/Infrastructure_as_code">Infrastructure as Code</a>. We started using  <a target="_blank" href="https://www.terraform.io/">Terraform </a> as our tool for Infrastructure as Code implementation. </p>
<p>In this article, we will explain the processes that we created for easy usability and scalability. Innovation and first principles thinking were key factors that empowered us to address scalability issues.</p>
<h2 id="1-state-file-management"><strong>1. State File Management</strong></h2>
<p>Terraform keeps a state file (named terraform.tfstate, in the folder where you run <code>terraform apply</code>) in which it stores the current state of the infrastructure resources it manages. Whenever we run <code>terraform plan</code> or <code>terraform apply</code>, it compares our code with the data in the state file and tries to sync them. This introduced the following issues when Terraform was used in production.</p>
<ul>
<li><p>Every member of the team needed the updated copy of the state file for creating the infrastructure</p>
</li>
<li><p>Conflicts had to be resolved when multiple team members tried to update the state file at the same time</p>
</li>
<li><p>Reverting to a previous version after a big blunder was a mess</p>
</li>
</ul>
<p>We soon realized we needed to keep the state file in a remote location where anyone working on the infrastructure could access it. This removed the need for each infrastructure developer to keep their own copy of the state file. Additionally, we needed a locking facility so that simultaneous updates to the state file would not cause headaches.</p>
<p>Fortunately, Terraform supports this natively via  <a target="_blank" href="https://www.terraform.io/docs/language/settings/backends/index.html">Terraform Backend</a>. A backend simply controls how Terraform stores and loads state. Terraform supports many backends, such as Amazon S3, Google Cloud Storage, Terraform Enterprise, etc. Almost all remote backends also provide locking: Terraform acquires a lock on the state file before running <code>terraform apply</code>, and all other requests for the file have to wait until the operation either completes or is aborted. </p>
<p>We chose Amazon S3 in combination with a DynamoDB table for locking state files. Further, we enabled versioning on our S3 bucket, so we retain every version of the state file and can switch back to any of them almost instantaneously. After creating the S3 bucket and the DynamoDB table, we just added the following code to enable the Terraform Backend: </p>
<pre><code><span class="hljs-section">terraform</span> {
 <span class="hljs-attribute">backend</span> <span class="hljs-string">"s3"</span> {
   <span class="hljs-attribute">bucket</span> = <span class="hljs-string">"&lt;your bucket name&gt;"</span>
   key    = <span class="hljs-string">"terraform.tfstate"</span>
   region = <span class="hljs-string">"&lt;region of your s3 bucket&gt;"</span>

   dynamodb_table = <span class="hljs-string">"&lt;name of the dynamoDB table&gt;"</span>
   encrypt        = <span class="hljs-literal">true</span>
 }
}
</code></pre><h2 id="2-modules-dryhttpsenwikipediaorgwikidon27trepeatyourself"><strong>2. Modules ( <a target="_blank" href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself">DRY</a> )</strong></h2>
<p>We realized a lot of our terraform code was repetitive. For instance, the terraform code for creating a new AWS Batch service was almost the same for dev, staging, and production environments. Even in the same environment, there were a lot of common steps for creating different Batch jobs.</p>
<p>In other programming languages, we usually create a function for the repetitive code and then just call it from all the places. Similarly, Terraform provides us  <a target="_blank" href="https://learn.hashicorp.com/tutorials/terraform/module">Terraform Modules</a>, which can be written once and then called multiple times. </p>
<pre><code>resource <span class="hljs-string">"aws_batch_job_definition"</span> <span class="hljs-string">"main"</span> {
   name = local.batch_job_name
   type = <span class="hljs-keyword">var</span>.job_type
   platform_capabilities = <span class="hljs-keyword">var</span>.platform_capabilities
   container_properties = jsonencode({
       command = <span class="hljs-keyword">var</span>.command
       image = local.container_image
       jobRoleArn = <span class="hljs-keyword">var</span>.job_role_arn
       memory = <span class="hljs-keyword">var</span>.memory
       vcpus = <span class="hljs-keyword">var</span>.vcpus
       environment  = local.environment
   })

   tags = local.final_tag
}

<span class="hljs-comment">// schedule the job</span>

resource <span class="hljs-string">"aws_cloudwatch_event_rule"</span> <span class="hljs-string">"event_rule"</span> {
   name                = <span class="hljs-string">"<span class="hljs-subst">${title(local.environment_type)}</span>Schedule_<span class="hljs-subst">${var.name}</span>"</span>
   description         = <span class="hljs-string">"Cloudwatch Rule for <span class="hljs-subst">${local.batch_job_name}</span>"</span>
   schedule_expression = <span class="hljs-string">"cron(<span class="hljs-subst">${var.cron_parameter}</span>)"</span>
}

resource <span class="hljs-string">"aws_cloudwatch_event_target"</span> <span class="hljs-string">"event_target"</span> {
 rule      = aws_cloudwatch_event_rule.event_rule.name
 batch_target {
   job_definition = aws_batch_job_definition.main.arn
   job_name = local.batch_job_name
 }
 role_arn = <span class="hljs-keyword">var</span>.cloudwatch_event_job_role_arn

 <span class="hljs-comment">// get the arn of the queue based on the environment</span>
 arn = lookup(local.batch_queue_arn, terraform.workspace)
}
</code></pre><p>Now the above module can be called anytime we need to create an AWS Batch job. </p>
<pre><code><span class="hljs-attribute">module</span> <span class="hljs-string">"batch_job"</span> {
 <span class="hljs-attribute">source</span> = <span class="hljs-string">"../../../../modules/aws-batch"</span>

 name           = <span class="hljs-string">"test-script"</span>
 vcpus          = <span class="hljs-number">1</span>
 memory         = <span class="hljs-number">256</span>
 cron_parameter = <span class="hljs-string">"30 4 ? * MON *"</span>
}
</code></pre><p>Additionally, it abstracted away a lot of implementation details. Now all our engineers write is the script's name, resource requirements, and the cron parameters. This has reduced the script deployment time after production merges from 3 hours to 5 minutes. We have written modules for all our infrastructure components.</p>
<h2 id="3-environment-isolation"><strong>3. Environment Isolation </strong></h2>
<p>As an organization, our philosophy is of learning through experimentation. So almost all the engineers perform numerous experiments which sometimes cause issues. In order to allow these experiments and keep our systems healthy, we have isolated our environments. Ideally, we wanted our dev, staging, and production infrastructure and the corresponding state file to be fairly isolated. This isolation can be achieved in two ways. One is Terraform Workspaces and the other is the file structure of your terraform code. </p>
<h3 id="31-terraform-workspace"><strong>3.1 Terraform Workspace</strong></h3>
<p>Terraform workspaces provide separate state maintenance: Terraform creates a separate state file for each workspace. Initially, you start in the “default” workspace. 
With the command <code>terraform workspace new &lt;workspace name&gt;</code> you can create a new workspace. Terraform will then automatically create an “env:” folder in the AWS S3 bucket we use for state files and put the state file for that workspace there. You can switch workspaces using <code>terraform workspace select &lt;workspace name&gt;</code></p>
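<p>For reference, Terraform's S3 backend nests the state of non-default workspaces under a key prefix (workspace_key_prefix, which defaults to "env:"). A tiny Python sketch of the resulting object keys, assuming the default prefix and our backend key of terraform.tfstate:</p>

```python
# Sketch of the S3 object keys the Terraform S3 backend uses per workspace,
# assuming the default workspace_key_prefix of "env:".
def state_key(workspace: str, key: str = "terraform.tfstate") -> str:
    # the "default" workspace stores state at the configured key itself;
    # named workspaces are nested under the prefix
    return key if workspace == "default" else f"env:/{workspace}/{key}"

print(state_key("default"))      # terraform.tfstate
print(state_key("production"))   # env:/production/terraform.tfstate
```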
<p>Leveraging this, we created three different workspaces, one each for dev, demo, and production. Now we can be sure that any change in the dev or staging environment will not impact our prod state file, and hence the production infrastructure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629452408736/kqT6V0uwS.png" alt="Screenshot 2021-08-20 at 3.09.35 PM.png" /></p>
<h3 id="32-file-structure"><strong>3.2 File Structure</strong></h3>
<p>Terraform Workspaces are a good way to achieve isolation. However, they are almost invisible at the code level, which caused many bugs, since a developer couldn't tell whether the code would run against the production or staging environment. Therefore, we have separate folders for each environment, and within each, every resource type has its own folder. This brings greater transparency for developers and reduces the chance of wrong commits (i.e. code for staging pushed to production).</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629441384882/YR0_hePpr.png" alt="Screenshot 2021-08-20 at 12.06.10 PM.png" /></p>
<h2 id="4-deployment-pipelines"><strong>4. Deployment Pipelines</strong></h2>
<p>Creating infrastructure from developers' local machines using <code>terraform apply</code> worked smoothly at first. But as we scaled, we realized this process had some limitations: </p>
<ul>
<li><p><strong>Opacity</strong>: anyone could run terraform apply, sometimes even without a PR review</p>
</li>
<li><p><strong>Scalability</strong>: all developers changing infrastructure at the same time created chaos</p>
</li>
<li><strong>Permission Management</strong>: everyone needed almost full write permission, which could have been catastrophic </li>
</ul>
<p>So we decided to create  <a target="_blank" href="https://www.jenkins.io/">Jenkins</a>  pipelines for infrastructure creation using Terraform. Our flow now involves:</p>
<ul>
<li><p>Creating a branch from the production branch</p>
</li>
<li><p>Making your changes</p>
</li>
<li><p>Running a Jenkins pipeline to create a plan (output will show the changes that will happen in the infrastructure if this PR is merged)</p>
</li>
<li><p>Creating a PR and putting the link to the above pipeline run in the description. This makes the reviewer's life easy, as they now know what will happen if the PR is merged.</p>
</li>
<li><p>After the production merge, we run the Jenkins deployment. Our deployment has an approval step: only after you read the plan and are satisfied with it will the job complete.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1629445711733/U-DiBbWgj.png" alt="Screenshot 2021-08-20 at 1.18.15 PM.png" /></p>
<ul>
<li>We created a user for Jenkins in the cloud provider and gave only this user access to create infrastructure. This ensures there is only one point of infrastructure modification across the organization</li>
</ul>
<pre><code>pipeline {
    agent {
        node {
            label <span class="hljs-string">'master'</span>
        }
    }
    stages {
        stage(<span class="hljs-string">"initialize terraform"</span>) {
            steps {    
                sh <span class="hljs-string">"terraform init"</span>
            }
        }

        stage(<span class="hljs-string">"switch the workspace"</span>) {
            steps {
                sh <span class="hljs-string">"terraform workspace select <span class="hljs-subst">${env_type}</span> || terraform workspace new <span class="hljs-subst">${env_type}</span>"</span>
            }
        }

        stage(<span class="hljs-string">"validate the config"</span>) {
            steps {
                sh <span class="hljs-string">"terraform validate"</span>
            }
        }

        stage(<span class="hljs-string">"create the plan"</span>) {
            steps {
                sh <span class="hljs-string">"terraform plan"</span>
            }
        }

        stage(<span class="hljs-string">'Approval'</span>) {
            steps {
                script {
                    approver  = input(id: <span class="hljs-string">'confirm'</span>, message: <span class="hljs-string">'Apply Terraform ?'</span>, submitter: <span class="hljs-string">''</span>, submitterParameter: <span class="hljs-string">'submitter'</span>)
                }
            }
        }

        stage(<span class="hljs-string">"apply the plan"</span>) {
            steps {
                sh <span class="hljs-string">"terraform apply -auto-approve"</span>
            }
        }
    }
}
</code></pre><h2 id="5-refactoring"><strong>5. Refactoring</strong></h2>
<p>Continuously refactoring code to improve its readability and hygiene is an inevitable part of modern software development. However, refactoring in Terraform is a little tricky, since it can put your code out of sync with the state file. Result? Unexpected changes to your infrastructure, even leading to downtime in the worst case.</p>
<p>What's the solution then? Terraform provides a few CLI commands for managing the state file. One of them is <code>terraform state mv &lt;source&gt; &lt;destination&gt;</code> (i.e. terraform state move). Using this command we can edit the state file and bring it in sync with our new code without changing any infrastructure. </p>
<pre><code>terraform <span class="hljs-keyword">state</span> mv <span class="hljs-string">'module.create_app_clients.raven'</span> <span class="hljs-string">'module.create_app_clients.raven-main'</span>
</code></pre><p>Interestingly, you can run <code>terraform state mv -dry-run</code> which won't change anything but will let you know what will happen if you move the state.</p>
<p>Even for the CLI commands, we have separate Jenkins pipelines. After your PR is merged into the production branch, the pipeline will run with the appropriate commands.</p>
<p>If you love to solve complex engineering problems like this one, we are hiring. Check out the open job positions  <a target="_blank" href="https://shipsy.io/career/">here</a>.</p>
]]></content:encoded></item><item><title><![CDATA[Remote Software Development Environment]]></title><description><![CDATA[By  Aman Ruhela  &  Ayush Saxena 
You might have often found yourself struggling to set up new projects owing to scattered information and multiple dependencies, making it difficult for a new hire (developer) to understand which version to use. Then ...]]></description><link>https://engineering.shipsy.io/remote-software-development-environment</link><guid isPermaLink="true">https://engineering.shipsy.io/remote-software-development-environment</guid><category><![CDATA[Git]]></category><category><![CDATA[ssh]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Visual Studio Code]]></category><dc:creator><![CDATA[Shipsy Engineering]]></dc:creator><pubDate>Tue, 17 Aug 2021 11:01:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1629182241040/OJTW7D4XK.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>By  <a target="_blank" href="https://www.linkedin.com/in/amanruhela/">Aman Ruhela</a>  &amp;  <a target="_blank" href="https://www.linkedin.com/in/ayush-saxena-4401b1206/">Ayush Saxena</a> </em></p>
<p>You might have often found yourself struggling to set up new projects owing to scattered information and multiple dependencies, making it difficult for a new hire (developer) to understand which version to use. Then issues like the inadequate CPU power of laptops slow down software development and make it difficult to build more than one project at a time. Result? Massive productivity loss. </p>
<p>Another significant challenge we frequently faced was integration testing between different services. More often than not, integration issues are discovered only after development branches are merged. This is a very reactive approach to development, and we wanted to change that.</p>
<p>Existing cloud technologies have made it extremely easy to get powerful remote machines at the click of a button at very economical rates. Our goal was to leverage cloud-based systems to shoulder heavy compute and storage requirements and enhance developers' productivity.</p>
<h2 id="heres-how-we-achieved-our-goal-in-7-easy-steps">Here’s How We Achieved Our Goal in 7 Easy Steps</h2>
<h3 id="step-1-selecting-vms">Step 1: Selecting VMs</h3>
<p>We selected powerful virtual machines from AWS (you can choose any cloud provider e.g. Azure, GCP, etc.) and granted developers access to the same based on their team and project. </p>
<h3 id="step-2-creating-a-user-on-the-vm">Step 2:  Creating A User on The VM</h3>
<p>Now, whenever a developer joins a team, they fill in a Google Sheet with their Bitbucket username, public SSH key, and the projects they are working on. This triggers a script that creates a user on the virtual machine. </p>
<p>We then make a folder for this user and give them access to that folder only. The script also adds the user's SSH key to the 'authorized_keys' file, enabling the user to connect to the remote machine seamlessly.</p>
<pre><code>add_user_with_ssh_key() {
   adduser --disabled-password --gecos <span class="hljs-string">"<span class="hljs-subst">${full_name}</span>,,,"</span> --quiet ${username}
   <span class="hljs-keyword">mkdir</span> /home/${username}/.ssh
   touch /home/${username}/.ssh/authorized_keys
   echo <span class="hljs-string">"$ssh_key"</span> &gt; <span class="hljs-regexp">/home/</span>${username}/.ssh/authorized_keys
   <span class="hljs-keyword">chown</span> -R ${username}:${username} /home/${username}/.ssh
}
</code></pre><h3 id="step-3-restricting-cpu-utilization">Step 3: Restricting CPU Utilization</h3>
<p>An imbalance in resource utilization can pose challenges when implementing this strategy, but we figured out a way to address it. To ensure one particular user does not exhaust all the resources, we created a separate <a target="_blank" href="https://en.wikipedia.org/wiki/Cgroups">cgroup</a> for each user (cgroups, in simple terms, let you restrict how many resources the processes in a particular cgroup can use). This maintains isolation between users; in other words, there is zero interference between the activities of concurrent users on a machine.</p>
<p>For more detailed information on cfs_quota_us, cfs_period_us and limit_in_bytes, refer to the  <a target="_blank" href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cpu">resource_management_guide</a>.</p>
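<p>As a sanity check on the limits used in the config below: cgroup v1 caps CPU at cfs_quota_us/cfs_period_us of one core per period, and cfs_period_us defaults to 100000 µs (100 ms) when not set explicitly, which our config does not. A quick back-of-the-envelope in Python:</p>

```python
# cgroup v1 CPU bandwidth: quota / period = share of one CPU core.
cfs_period_us = 100_000   # kernel default period (assumed; not set in our config)
cfs_quota_us = 20_000     # value from our cgconfig.conf

cpu_share = cfs_quota_us / cfs_period_us
print(f"CPU cap per user: {cpu_share:.0%} of one core")   # 20%

# memory.limit_in_bytes accepts suffixed values; "10240m" means 10240 MiB.
limit_in_bytes = 10240 * 1024 * 1024
print(f"Memory cap per user: {limit_in_bytes / 2**30:.0f} GiB")   # 10 GiB
```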
<pre><code><span class="hljs-function"><span class="hljs-title">add_cgroup_configs</span></span>() {
   <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"group <span class="hljs-variable">${username}</span> {
    cpu {
        cpu.cfs_quota_us=20000;
    }
    memory {
        memory.limit_in_bytes = 10240m;
    }
   }"</span> &gt;&gt; /etc/cgconfig.conf
   <span class="hljs-built_in">echo</span> -e <span class="hljs-string">"\n <span class="hljs-variable">${username}</span> cpu,memory <span class="hljs-variable">${username}</span>"</span> &gt;&gt; /etc/cgrules.conf
}

<span class="hljs-function"><span class="hljs-title">restart_cgroups_services</span></span>() {
   systemctl restart cgrulesgend.service
   systemctl restart cgconfigparser.service
}
</code></pre><h3 id="step-4-cloning-repositories">Step 4:  Cloning Repositories</h3>
<p>Then we created the SSH key pair and gave it READ access to all repositories. This allowed us to clone the repositories of all the said user's projects into the 'user folder.' After cloning, we change the config file in the .git folder to ensure that any future commits/pushes are attributed to that particular user only.</p>
<p>The script will further do the following things:- </p>
<ul>
<li><p>Install all the dependencies of the projects</p>
</li>
<li><p>Install basic software packages like git, node, python, docker, etc</p>
</li>
<li><p>Copy the configuration files for the projects</p>
</li>
</ul>
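<p>The dependency-install step above can be sketched as a small driver. The manifest-to-command mapping here is an illustrative assumption, not the exact script:</p>

```python
import subprocess

# Illustrative mapping from a project's manifest file to the
# command that installs its dependencies.
INSTALLERS = {
    "package.json": ["npm", "install"],
    "requirements.txt": ["pip", "install", "-r", "requirements.txt"],
}

def install_command(manifest):
    # Pick the install command based on which manifest the project ships.
    return INSTALLERS.get(manifest)

def install_dependencies(project_dir, manifest):
    cmd = install_command(manifest)
    if cmd is None:
        raise ValueError(f"No installer known for {manifest}")
    subprocess.run(cmd, cwd=project_dir, check=True)
```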
<p>So, now the project is ready to be used on a virtual machine. But how do we make it easily accessible and simple for the developers to work with? Luckily, the good folks at Microsoft have built a great VS Code extension pack, <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack&amp;ssr=false#overview">Remote Development</a>, to solve this problem. </p>
<pre><code><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">setup_project</span>(<span class="hljs-params">username, projectName, githubID</span>):</span>
   <span class="hljs-keyword">try</span>:
       bitbucketCreds = bitbucket_config()

       gitCloneCommand=<span class="hljs-string">f"""
            git clone https://<span class="hljs-subst">{bitbucketCreds[<span class="hljs-string">'username'</span>]}</span>:<span class="hljs-subst">{bitbucketCreds[<span class="hljs-string">'password'</span>]}</span>@bitbucket.org/&lt;yourbitbucketaccountname&gt;/<span class="hljs-subst">{projectName}</span>.git /home/<span class="hljs-subst">{username}</span>/<span class="hljs-subst">{projectName}</span>
       """</span>
       run_system_command(gitCloneCommand, <span class="hljs-string">"Could not clone repository"</span>)
       changeLocalRepoOwnership=<span class="hljs-string">f"""
           chown -R <span class="hljs-subst">{username}</span>:<span class="hljs-subst">{username}</span> /home/<span class="hljs-subst">{username}</span>/<span class="hljs-subst">{projectName}</span>
       """</span>
       run_system_command(changeLocalRepoOwnership, <span class="hljs-string">"Unable to change local repo ownership"</span>)
       gitChangeUserCommand=<span class="hljs-string">f"""
           sudo -H -u <span class="hljs-subst">{username}</span> bash -c "cd /home/<span class="hljs-subst">{username}</span>/<span class="hljs-subst">{projectName}</span> &amp;&amp; git remote remove origin &amp;&amp; git remote add origin https://<span class="hljs-subst">{githubID}</span>@bitbucket.org/&lt;yourbitbucketaccountname&gt;/<span class="hljs-subst">{projectName}</span>.git"
       """</span>
       run_system_command(gitChangeUserCommand, <span class="hljs-string">"Unable to change remote origin"</span>)
       print(<span class="hljs-string">"Base Project Setup Completed"</span>)
       fetch_config_from_system(username, projectName)

   <span class="hljs-keyword">except</span> Exception <span class="hljs-keyword">as</span> e:
       print(e)
</code></pre><h3 id="step-5-installing-a-simple-plugin">Step 5: Installing A Simple Plugin</h3>
<p>You need to install the plugin, run a Command Palette search for "Remote-SSH: Open Configuration File", and enter your details. </p>
<pre><code>Host &lt;your VM’s DNS/IP&gt;
 HostName &lt;your VM’s DNS/IP&gt;
 <span class="hljs-keyword">User</span> &lt;your <span class="hljs-keyword">user</span> <span class="hljs-type">name</span> <span class="hljs-keyword">on</span> the VM&gt;
 ForwardAgent yes
 IdentityFile &lt;<span class="hljs-keyword">location</span> <span class="hljs-keyword">of</span> <span class="hljs-built_in">private</span> ssh key file <span class="hljs-keyword">on</span> your computer&gt;
</code></pre><h3 id="step-6-connecting-vs-code-with-the-vm">Step 6: Connecting VS Code with the VM</h3>
<p>After that, click the Remote Development button on the left panel to connect to the machine. Now your VS Code is connected to the virtual machine on the cloud. From here on, even though it will appear as if a developer is working on her local system, in reality it's the powerful cloud-based virtual machine she is working on.</p>
<h3 id="step-7-executing-integration-testing">Step 7: Executing Integration Testing</h3>
<p>With all your developers working on the same machine, a back-end engineer can run the server in VS Code (which is actually running on the virtual machine) and share the port number with the corresponding front-end engineer for integration testing. The engineers do not need to get their branches merged and deployed first; once both are satisfied, their PRs can be merged. This significantly reduces development time.</p>
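<p>Before pointing a front-end build at the shared server, a quick reachability check on the port saves debugging time. A minimal sketch (the host and port are whatever the back-end engineer shares; nothing here is specific to our setup):</p>

```python
import socket

def backend_reachable(host, port, timeout=2.0):
    """Quick TCP check that a back-end server on the shared VM is up."""
    try:
        # create_connection performs a full TCP handshake and raises
        # OSError on refusal or timeout.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```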
<p>In a nutshell, at Shipsy we are seamlessly scaling development and testing environments irrespective of a developer's base location and laptop capabilities. In other words, our engineering teams' work is powered by the cloud, but from a developer's perspective, they are simply working in VS Code on their own laptops.</p>
<p>If you love to solve complex engineering problems like this one, we are hiring. Check out the open job positions <a target="_blank" href="https://shipsy.io/about-career/">here</a>.</p>
]]></content:encoded></item></channel></rss>