Adopting Edge Computing for Web Apps – 4 Key Enablers

The adoption of Internet-connected devices has grown exponentially in recent years and shows no sign of slowing down. According to Gartner, by 2023, the average CIO will be responsible for more than three times as many endpoints as they managed in 2018. However, supporting such an increase would require an expansion of cloud infrastructure and a substantial provision of network capacity, which may not be economically feasible.

In such cases, edge computing could emerge as a solution, as the necessary resources such as compute, storage, and network can be provided closer to the data source for processing.

Businesses are looking for near real-time, actionable information, which is driving the adoption of edge computing across industries. The benefits of edge computing are well known; in a previous article, I illustrated them along with some use cases.

Adoption of Edge Computing in the development of web applications

It’s only a matter of time before edge goes mainstream, as evidenced by a recent IDC survey that found that 73% of respondents chose edge computing as a strategic investment. The open source community, cloud providers, and telecom service providers are working to strengthen the edge computing ecosystem, accelerating its adoption and the pace of innovation.

With such tailwinds in their favor, web application developers should have an edge adoption plan in place so they can be more agile and take advantage of edge computing’s ability to improve user engagement.

Benefits such as near real-time insights with low latency and reduced cloud server bandwidth usage bolster the case for edge computing in web applications. Adopting an edge computing architecture for web applications can increase productivity, lower costs, save bandwidth, and create new revenue streams.

I discovered that there are four critical enablers for edge computing that help web developers and architects get up and running.

1. Ensure application agility with the right application architecture

The edge ecosystem comprises multiple components like appliances, gateways, edge servers or edge nodes, cloud servers, etc. For web applications, the edge computing workload must be agile enough to run on the edge ecosystem components, depending on peak load or availability.

However, there could be specific use cases, such as detecting poaching activity via drones in a dense forest with little or no network connectivity, that require native application development for edge devices or gateways.

Adopting cloud-native architectural patterns, such as microservices or serverless, brings application agility. The Cloud Native Computing Foundation’s (CNCF) definition of cloud native supports this argument: “Cloud-native technologies enable organizations to build and run scalable applications across public, private, and hybrid clouds.

“Features like containers, service meshes, microservices, immutable infrastructure, and declarative application programming interfaces (APIs) best illustrate this approach. These features enable loosely coupled systems that are robust, manageable, and observable. They allow engineers to make high-impact changes frequently and with minimal effort.”

The most important step in adopting edge computing would be to use a cloud-native architecture for the application or at least the service to be deployed at the edge.
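To make this concrete, here is a minimal sketch of a stateless, container-friendly service written in TypeScript for Node.js. The endpoint path and environment variable names are illustrative assumptions, not prescriptions; the point is that a service with no local state can be packaged once and scheduled on a cloud node or an edge node interchangeably.

```typescript
// Minimal sketch of a stateless service suitable for "develop once, deploy anywhere".
// Endpoint and environment variable names are illustrative, not prescriptive.
import { createServer } from "node:http";

const PORT = Number(process.env.PORT ?? 8080); // typically injected by the orchestrator

const server = createServer((req, res) => {
  if (req.url === "/healthz") {
    // Health probe so a scheduler (cloud or edge) can manage the container
    res.writeHead(200).end("ok");
    return;
  }

  // No session or disk state is kept in the process, so any replica can answer
  res.writeHead(200, { "content-type": "application/json" });
  res.end(
    JSON.stringify({
      message: "served from wherever this container was scheduled",
      node: process.env.NODE_NAME ?? "unknown",
    })
  );
});

server.listen(PORT, () => console.log(`listening on ${PORT}`));
```

Because all state lives outside the process, the same container image can move from a regional cloud zone to an edge node without code changes.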

2. Get the benefits of edge infrastructure and services by adopting CSP offerings

Cloud Service Providers (CSPs) offer services such as compute and storage local to a region or zone, acting as mini/regional data centers managed by CSPs. Applications or services that adhere to the “develop once, deploy everywhere” principle can be easily deployed on this edge infrastructure.

CSPs like AWS (Outposts, Snowball), Azure (Edge Zones), GCP (Anthos), and IBM (Cloud Satellite) have already extended some of their fully managed services to on-premises settings. Start-ups and growth-stage companies can take advantage of these hybrid cloud solutions to implement edge solutions faster and with greater security, provided they can afford the associated cost.

For an application running on wireless mobile devices that rely on cellular connectivity, the new 5G cellular technology can provide a significant latency benefit. Additionally, CSPs are deploying their compute and storage resources closer to the telecom operator’s network, which mobile applications, such as games or virtual reality, can use to enhance the end-user experience.

3. Leverage custom code execution with CDN

Content delivery networks (CDNs) have distributed points of presence (PoPs) to cache and serve web application content faster. They are rapidly evolving, and many PoPs now host a language runtime such as JavaScript (V8), which allows programs to execute closer to the user. Additionally, this increases security by migrating client-side program logic to the edge.

Web applications such as online shopping portals can provide a better customer experience with reduced latency when supported by such services. For example, applications can benefit more by moving cookie handling logic to CDN edge processing instead of accessing the origin server. This move could prove effective when there is a big spike in traffic during events like Black Friday and Cyber Monday.

Furthermore, such a method could also be effective for running A/B tests: you can serve a fixed subset of users an experimental version of the app while the rest receive the standard version.
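As an illustration, the sketch below shows how cookie handling and A/B bucketing could be done entirely at the edge, assuming a Cloudflare Workers-style JavaScript runtime. The cookie name, variant names, and origin paths are assumptions made for the example.

```typescript
// Sketch of A/B bucketing handled at the CDN edge, assuming a Cloudflare
// Workers-style runtime. Cookie and variant names are hypothetical.
export default {
  async fetch(request: Request): Promise<Response> {
    const cookies = request.headers.get("Cookie") ?? "";

    // Reuse an existing assignment so a returning user keeps the same variant
    let variant = /ab-variant=(control|experiment)/.exec(cookies)?.[1];
    if (!variant) {
      variant = Math.random() < 0.5 ? "control" : "experiment";
    }

    // Rewrite the request toward the matching origin path; the cookie logic
    // itself never reaches the origin server
    const url = new URL(request.url);
    url.pathname = `/${variant}${url.pathname}`;
    const originResponse = await fetch(new Request(url.toString(), request));

    // Copy the response so its headers are mutable, then pin the assignment
    const response = new Response(originResponse.body, originResponse);
    response.headers.append(
      "Set-Cookie",
      `ab-variant=${variant}; Path=/; Max-Age=86400`
    );
    return response;
  },
};
```

During a traffic spike, this logic runs at the PoP nearest each user, so the origin only serves the two variants rather than per-user decision traffic.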

4. Use open deep learning model formats that provide ML framework interoperability

The diversity of neural network models and model frameworks has multiplied in recent years. This has encouraged developers to use and share neural network models across a broad spectrum of frameworks, tools, runtimes, and compilers. But to run AI/ML models across a variety of edge devices, developers and entrepreneurs should seek some standardization of model formats to counter edge heterogeneity.

Open deep learning model formats such as the Open Neural Network Exchange (ONNX) are emerging as a solution, as they support interoperability between commonly used deep learning frameworks. ONNX provides a mechanism to export models from different frameworks to the ONNX format, and ONNX Runtime is available in multiple languages, including JavaScript. Both the models and the runtime are supported on multiple platforms, including low-power devices.
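For instance, a browser-based application could load an exported ONNX model with ONNX Runtime Web and run inference locally. In this sketch, the model path and the “input”/“output” tensor names are assumptions; they depend on how the model was actually exported.

```typescript
// Sketch of in-browser inference with ONNX Runtime Web (onnxruntime-web).
// The model file and the "input"/"output" tensor names are hypothetical and
// must match how the model was exported to ONNX.
import * as ort from "onnxruntime-web";

export async function runInference(features: Float32Array): Promise<Float32Array> {
  // Load a model that was exported to ONNX from any supported framework
  const session = await ort.InferenceSession.create("/models/classifier.onnx");

  // Wrap the raw data in a tensor with the shape the model expects
  const input = new ort.Tensor("float32", features, [1, features.length]);

  // Inference runs locally on the device; no request goes to a cloud server
  const results = await session.run({ input });
  return results.output.data as Float32Array;
}
```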

The conventional approach for machine learning applications is to build AI/ML models in a compute-intensive cloud environment and use that model for inference. With JavaScript AI/ML frameworks, it is possible to run inferences in browser-based applications. Some of these frameworks also support training models in the browser or in the JavaScript backend.
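As an example of the latter, a small model can be trained directly in the browser with TensorFlow.js, one such JavaScript framework. The tiny synthetic dataset below exists only to make the sketch self-contained.

```typescript
// Sketch of in-browser training with TensorFlow.js; the data is synthetic.
import * as tf from "@tensorflow/tfjs";

export async function trainTinyModel(): Promise<tf.Sequential> {
  // A handful of points drawn from y = 2x - 1
  const xs = tf.tensor2d([-1, 0, 1, 2, 3, 4], [6, 1]);
  const ys = tf.tensor2d([-3, -1, 1, 3, 5, 7], [6, 1]);

  // A single dense unit is enough to fit a line
  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
  model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

  // Training happens on the device (browser or Node.js backend), not in the cloud
  await model.fit(xs, ys, { epochs: 100 });
  return model;
}
```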

The right technology decisions ensure better business value

Working with dozens of startups, I’ve found that the best business decisions sometimes hinge on early adoption of emerging technologies like edge computing for better customer impact.

However, the adoption of emerging technology requires foresight and planning to be successful. By following the enablers above, you will be well positioned for a seamless and sustainable integration of edge computing to develop web-based applications.

Image credit: Ketut Subiyanto; Pexels

Pankaj Mendki

Pankaj Mendki is Talentica Software’s Director of Emerging Technology. Pankaj is an IIT Bombay alumnus and a researcher exploring and accelerating the adoption of evolving technologies for early-stage and growth startups. He has published and presented research papers on blockchain, edge computing, and IoT at various IEEE and ACM conferences.
