State of the Edge 2018: defining the edge and its uses

In this article, Edge Research Group:

  • Summarizes the newly released State of the Edge report.
  • Highlights a key use case.
  • Offers an end-user perspective on edge services.

Microsoft and Hewlett Packard Enterprise have recently made headlines by talking about investing billions of dollars in products and services for edge computing. But beyond the eye-popping dollar figures, there is a lot of activity in the edge computing market.

With that in mind, Edge Research Group and Structure Research partnered to produce the State of the Edge 2018 report with support from Vapor IO, Packet, Ericsson UDN, Arm, and Rafay Systems. The report covers a lot of ground in our attempt to assess the impact of edge computing on technology providers, telecom operators, ISPs, and most significantly, on developers and end-users.

The inaugural State of the Edge report:

  • Assesses the state of edge computing today.
  • Discusses trends driving the development of the edge computing ecosystem of technologies.
  • Illustrates practical architectures for its deployment.
  • Hypothesizes how a marketplace for highly distributed compute, storage and network resources might function.

Focusing on the end-user perspective

One of the areas the report focuses on is defining terms, and then extending the implications of those definitions into the market. (The glossary is now an open source project under the stewardship of The Linux Foundation).

Defining a view of what edge computing is, the report states:

Edge Computing: The delivery of computing capabilities to the logical extremes of a network in order to improve the performance, operating cost and reliability of applications and services.

This definition allows for edge services to exist at different layers that extend from the ‘core’ or ‘central’ cloud. A few key terms from the report that relate to our case study:

  • The infrastructure edge refers to IT resources which are positioned on the network operator or service provider side of the last mile network.
  • The access edge is the part of the infrastructure edge closest to the end user and their devices.
  • The aggregation edge refers to a portion of the edge infrastructure which functions as a point of aggregation for multiple edge data centers deployed at the access edge sublayer.

The aggregation layer is useful in performance terms. At a high level, its purpose is to provide a reduced number of contact points to and from other entities or components in the web application architecture. A CDN, for example, can act as an aggregation layer by providing a distributed infrastructure for caching content and performing functions on end-user requests before they are passed back to a core ‘origin’ infrastructure.
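
To make the aggregation role concrete, here is a minimal sketch of an edge node that answers repeat requests from a local cache and only forwards misses to the origin. The origin URL, cache TTL, and request path are hypothetical and purely illustrative; real edge platforms expose far richer caching controls.

```typescript
// Minimal sketch of an edge node acting as an aggregation point: repeat
// requests are served from a local cache, and only cache misses travel
// back to the core "origin" infrastructure.

const ORIGIN_URL = "https://origin.example.com"; // hypothetical origin
const TTL_MS = 60_000; // treat cached entries as fresh for 60 seconds

interface CacheEntry {
  body: string;
  storedAt: number;
}

const cache = new Map<string, CacheEntry>();

async function handleEdgeRequest(path: string): Promise<string> {
  const entry = cache.get(path);
  if (entry && Date.now() - entry.storedAt < TTL_MS) {
    return entry.body; // served entirely at the edge, no origin round trip
  }
  // Cache miss: this edge location is the single point of contact with the origin.
  const response = await fetch(`${ORIGIN_URL}${path}`);
  const body = await response.text();
  cache.set(path, { body, storedAt: Date.now() });
  return body;
}

// Usage: many end-user requests for the same path collapse into an
// occasional fetch against the origin.
handleEdgeRequest("/catalog/popular").then((html) => console.log(html.length));
```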

Use cases for edge computing

A number of use cases for edge computing are outlined in the report. One use case worth highlighting here is edge content delivery. Content delivery gets overlooked because of the excitement around topics such as autonomous driving, and perhaps because it is thought of as a problem already serviced by CDN vendors. Those vendors do offer an increasing array of solutions, including secure network and application access, cloud-based security such as web application firewalls, and edge compute functions.

Where edge computing goes beyond traditional CDN services is in running customer-defined workloads in edge locations; while some workloads will use short-lived functions, others will be stateful applications.

Our case study (not published in the State of the Edge report) helps illustrate the need for a new generation of edge content delivery services.

Challenge

The customer has a large-scale web and internet software business. Currently, applications are multi-tiered, with a request handling tier made up of high-availability proxy servers and load balancers that takes requests and routes them to a middle tier for further connection management chores before doling requests out to the application server tier.

The “front edge” of the architecture is used for connection management, according to the customer. In internet services, higher performance establishes reliability and trust, driving more engagement from the consumer of the service. Performance thus becomes a key indicator of revenue. Security and availability have to be assured as well.

As more services are accessed on mobile devices, the challenge becomes how to have performance, security, and availability together. The devices can’t pre-cache all content; some of it is inherently personalized and context driven.

Solution

CDN vendors have gone beyond static content delivery and are doing more around connection management, to the point where most offer DDoS mitigation and edge traffic protection (including bot management). Even so, these edge services are not easy to use because they are not well defined, or at least are not defined in any standard way across the vendor landscape.

In the case of mobile access to services, one could move more application logic to an edge location near the user, but much of the statefulness (i.e., stored data) of the application still has to be accessed from a database, and the customer states that they don’t envision being able to distribute their databases in any meaningful way any time soon.

The solution, then, is to aggregate vast numbers of requests and route them to resources based on proximity. The experience, according to the customer, remains rich and personalized, but it still has to be delivered in a “smart way” to stay within performance boundaries. That means paying attention to web application and content-loading performance vectors such as time to first byte, the time to load the page above ‘the fold’, the time it takes to fully render a page, and the like.
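
For readers who want to see how those performance vectors are typically captured, the browser-side sketch below uses the standard Navigation Timing and Paint Timing APIs. It is a generic illustration rather than anything described by the customer, and the reporting step is just a console log.

```typescript
// Browser-side sketch for capturing the performance vectors mentioned above:
// time to first byte, a proxy for above-the-fold rendering (first contentful
// paint), and the time to fully render the page (load event end).

function reportPagePerformance(): void {
  const [nav] = performance.getEntriesByType(
    "navigation"
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  // Time from the start of navigation until the first response byte arrives.
  const ttfb = nav.responseStart;

  // First contentful paint is a common stand-in for "content above the fold".
  const fcpEntry = performance.getEntriesByName("first-contentful-paint")[0];
  const firstContentfulPaint = fcpEntry ? fcpEntry.startTime : undefined;

  // Time until the page has fully rendered and the load event has completed.
  const fullRender = nav.loadEventEnd;

  console.log({ ttfb, firstContentfulPaint, fullRender });
}

window.addEventListener("load", () => {
  // Defer one tick so loadEventEnd has been populated.
  setTimeout(reportPagePerformance, 0);
});
```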

Future plans and needs

“In companies like ours, even a 5% miss rate is expensive,” says the respondent; a miss means that a request isn’t filled by the middle tier and has to go back to the main (or origin) server for data.

What the customer would like to do is run their own logic on a CDN-like set of network and compute resources. “Then you have general purpose computing, and then you can solve all problems.”

The problem is that CDN and cloud vendors make only pre-determined functions available for use. Running logic in the nearest cloud data center, even if a player like Amazon had thousands of locations, still leaves a big challenge in managing, monitoring, and updating the code. Another challenge is storage persistence: the customer wants to be able to filter and aggregate data at the edge, but there is no way to store and process data at any point in the infrastructure edge.

The customer would like to see what he calls a “dynamic compute edge” that combines a request edge (the first hop from device to infrastructure edge, which would have general-purpose processing) with a “function” edge capable of a minimal level of processing. That minimal level would include intelligent routing that sends the request on the best route to the place where a table lookup can occur, for example.
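
A very rough sketch of that routing step appears below. The candidate edge locations, health-check path, and lookup path are hypothetical, and picking the lowest-latency responder is only one possible interpretation of “the best route.”

```typescript
// Hypothetical sketch of the "function edge" routing step: probe a set of
// candidate edge locations and forward the request to whichever responds
// fastest, as a stand-in for routing to the best place for a table lookup.

const CANDIDATE_EDGES = [
  "https://edge-nyc.example.com",
  "https://edge-chi.example.com",
  "https://edge-dal.example.com",
]; // hypothetical edge locations

async function measureLatency(base: string): Promise<number> {
  const start = Date.now();
  try {
    await fetch(`${base}/healthz`, { method: "HEAD" }); // hypothetical health endpoint
    return Date.now() - start;
  } catch {
    return Number.POSITIVE_INFINITY; // unreachable locations are never chosen
  }
}

async function routeToNearestEdge(path: string): Promise<Response> {
  const latencies = await Promise.all(CANDIDATE_EDGES.map(measureLatency));
  const best = CANDIDATE_EDGES[latencies.indexOf(Math.min(...latencies))];
  // Forward the request to the lowest-latency edge location.
  return fetch(`${best}${path}`);
}

routeToNearestEdge("/lookup/user/42").then((r) => console.log(r.status));
```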

That kind of edge compute service would enable “whole new applications I haven’t thought of yet,” the customer said.

The State of the Edge 2018 report can be viewed and downloaded at stateoftheedge.com.

Datazoom builds “data delivery network” to support video analytics

Summary:

Video quality and service availability go hand in hand with improving financial performance for online video service providers. Datazoom is a startup that wants to make it easier for these companies to leverage data to improve operations and profitability. The underlying platform has significant potential that extends beyond video analytics into a variety of markets, including IoT.

Key Takeaways

  • Datazoom has funding for a service that aims to make integration of analytics tools easier for online video service providers.
  • Datazoom’s “data delivery network” has potential applications beyond OTT services. The company could move into gaming and IoT applications where latency impacts the ability to gather and analyze large amounts of data, for example.

Company background:

Datazoom was co-founded by CEO Diane Strutner and Jason Thiebeault. Strutner was previously the VP of Global Sales and Business Development at NicePeopleAtWork (NPAW), a provider of a video analytics platform. Thiebeault currently serves as executive director of the Streaming Video Alliance. Michael Skariah, formerly director of engineering at Ooyala, serves as the company’s CTO. The company closed a pre-seed round of $700,000, led by Brooklyn Bridge Ventures.

Details:

As it turns out, integrating and merging data from dozens of disparate sources is a hard problem for these companies to solve. To address it, Datazoom has built a data ingest and management platform. Datazoom’s Adaptive Video Logistics platform serves as an abstraction layer, pulling data from a customizable SDK into the data ingest platform and enabling service providers to aggregate and time-align data from multiple sources in real time.

Customers can choose which data needs to be signaled back to a data hub (meaning no wait for a response to an HTTPS request), and can make changes or updates to the data collected, or to the collection frequency, at any time. Ordinarily, each SDK used for data collection adds significant latency (on the order of minutes); Datazoom says its method helps the analytics process because data is collected in a more uniform manner with sub-one-second latency (which is covered by an SLA).
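
The core idea of time-aligning data from many collectors can be illustrated with a short sketch. The event shape, source names, and one-second bucketing below are hypothetical and are not Datazoom’s actual API.

```typescript
// Illustrative sketch (not Datazoom's API) of time-aligning events from
// multiple collection sources into one-second buckets, so player, ad and
// audience metrics can be compared on a common timeline.

interface CollectedEvent {
  source: string;      // e.g. "player-sdk", "ad-server" (hypothetical names)
  timestampMs: number; // client-side event time in milliseconds
  payload: Record<string, unknown>;
}

type AlignedBuckets = Map<number, CollectedEvent[]>;

function alignToBuckets(events: CollectedEvent[], bucketMs = 1000): AlignedBuckets {
  const buckets: AlignedBuckets = new Map();
  for (const event of events) {
    // Round each event down to the start of its time bucket.
    const bucket = Math.floor(event.timestampMs / bucketMs) * bucketMs;
    const existing = buckets.get(bucket) ?? [];
    existing.push(event);
    buckets.set(bucket, existing);
  }
  return buckets;
}

// Usage: events reported by different SDKs within the same second land in
// the same bucket, regardless of which source delivered them first.
const aligned = alignToBuckets([
  { source: "player-sdk", timestampMs: 1_530_000_000_250, payload: { rebuffering: true } },
  { source: "ad-server", timestampMs: 1_530_000_000_900, payload: { adStart: "pre-roll" } },
]);
console.log(aligned.size); // 1
```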

Part of what enables fast data collection is a cloud-agnostic infrastructure that resembles a CDN, which Datazoom calls a “data delivery network.” It is currently hosted on AWS and Google Cloud, with Azure POPs coming soon. On the other side of the equation, Datazoom has roughly a dozen integrations with data collectors (video and audience analytics tools, ad-serving tools and the like), and says it is completing two to three more integrations each week as customer requests come in.

Value proposition:

By Datazoom’s count, major brands use an average of 14 tools to capture data from video players. Now multiply that by the number of device types being delivered to, since a different player is needed for each mobile OS, smart TV client, and game console. That is a bear to manage when it comes to deploying code on each client, and it contributes to bloat on the client that can impact device performance, one of the issues these tools were originally trying to solve.

As an example, some analytics services leverage logic in an SDK to do failover; this means when a stream degrades for a consumer, the player will automatically seek another source for the stream. Datazoom enables customers to leverage data from the player to do more than CDN switching; they can use different ad delivery, authentication and other systems as the situation dictates.
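
The sketch below shows what such player-side failover logic might look like; the stream URLs, quality thresholds, and round-robin switch are hypothetical rather than drawn from any vendor SDK.

```typescript
// Hypothetical sketch of player-side failover: when measured quality
// degrades past a threshold, rotate playback onto another CDN source.

const STREAM_SOURCES = [
  "https://cdn-a.example.com/live/master.m3u8",
  "https://cdn-b.example.com/live/master.m3u8",
]; // hypothetical CDN endpoints for the same stream

let activeSource = 0;

interface PlaybackSample {
  rebufferRatio: number;       // fraction of playback time spent rebuffering
  droppedFramesPerSec: number; // dropped frames per second
}

function shouldFailover(sample: PlaybackSample): boolean {
  // Illustrative thresholds; real systems would tune these per title and device.
  return sample.rebufferRatio > 0.05 || sample.droppedFramesPerSec > 5;
}

function selectSource(sample: PlaybackSample): string {
  if (shouldFailover(sample)) {
    activeSource = (activeSource + 1) % STREAM_SOURCES.length;
  }
  return STREAM_SOURCES[activeSource];
}

// Usage: a degraded sample causes the player to switch to the next CDN.
console.log(selectSource({ rebufferRatio: 0.12, droppedFramesPerSec: 2 }));
```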

Datazoom aims to simplify the gathering of data not by bidding to be the one data stream to rule them all, but to move the integration point away from the player. 

Pricing for the SaaS offering is based on a combination of the volume of data processed and the desired SLA level for latency. Datazoom’s premise is that this will be a more predictable cost than pricing based on the number of video views or sessions.

Market context:

By some counts, there are more than 200 video service providers in the global market. Datazoom is targeting the biggest names in the market because they have the biggest number of tools to integrate (think NBC Universal, Sony and such). Others in the space include Conviva, NPAW, and Cedexis. CDN service providers like Akamai have their own analytics offerings centered around network and delivery performance, with Akamai also offering web application performance monitoring through its Soasta acquisition.

Customers: Datazoom already has letters of intent from major brands and is busy conducting trials with a number of potential clients.


Microsoft re-org focuses on edge, AI

Microsoft CEO Satya Nadella issued a memo to Microsoft employees recently – and we expect it is going to have as lasting an impact on the company as Bill Gates’ famous “Internet Tidal Wave” memo of 1995.

Entitled “Embracing our future: Intelligent Cloud and Intelligent Edge”, Nadella outlines a plan to reorganize the company around a vision of a ubiquitous distributed compute fabric that extends from cloud to ‘edge’ and infuses everything with AI.

Key Takeaways:

  • Edge computing and AI are going to be essential technologies for this new version of Microsoft.
  • Microsoft plans to invest $5bn in IoT products, services, and research over the next four years, further highlighting the shift of development focus to edge services.
  • Microsoft is showing leadership in AI by working to ensure that it develops tools to detect and address bias in AI systems.

Details

Nadella has directed a team to be organized around what he called Microsoft’s “Cloud + AI Platform,” with the goal being the creation of an integrated platform across all layers of the tech stack, from core cloud services to edge services.

Within this organization are several specific units focused on AI, including:

  • Business AI – focused on the internal application of AI
  • AI Perception & Mixed Reality – a new team taking speech, vision, MR and related technology and building Microsoft products, as well as cloud services for third parties on Azure.
  • AI Cognitive Services & Platform (focusing on AI Platform, AI Fundamentals, Azure ML, AI Tools and Cognitive Services)

Perhaps less noticed, but of great significance is Microsoft’s move to address issues around the ethics of applying AI. As Facebook is learning the hard way, AI and ML applied to personal information can be misused, causing significant damage to society as well as its stock valuation.

Nadella is establishing the internal “AI and Ethics in Engineering and Research” (AETHER) Committee to ensure that Microsoft’s AI platform will “benefit the broader society.” Nadella promised to invest in strategies and tools for detecting and addressing bias in AI systems.

Windows won’t go away, but definitely takes a back seat in the “Experiences and Devices” team. The former leader of the Windows and Devices Group, Terry Myerson, is leaving Microsoft, and there are a number of other executive moves detailed by longtime Microsoft observer and journalist Mary Jo Foley.

Company background

Nadella has issued impactful memos before: following his 2014 company-wide email describing his vision of Microsoft as a platform company in a mobile-first, cloud-first world, he cut 18,000 jobs in the largest layoffs in the company’s history.

How does it compare to Gates’ ‘Tidal Wave’ memo from 22 years ago? Gates’ memo was filled with more urgency, as the company was in danger of missing the internet wave; Nadella’s latest memo is forward-looking in a different way, written from the perspective of a company that’s not far behind the competition. Still, Nadella recognizes that the company’s structure and size could be a hindrance in adapting to the era of cloud and edge, and he has adjusted accordingly. It might not be as well remembered as Gates’ memo 20 years down the road, but it will be no less important as the industry fills out the role that edge services will play in the next two to five years.

Rafay Systems wants to ease app use on ‘Programmable Edge’

Rafay Systems is a startup that aims to ease the process of developing and deploying applications at the infrastructure edge, whether within a metro network, a remote data center, or at the radio access network edge. Rafay is aiming to stand out from the cloud providers and CDNs by allowing developers to bring their own custom applications, rather than just use pre-defined applications or functions.

Key Takeaways:

  • Rafay Systems is positioning itself as a provider of a fully programmable edge compute service.
  • More than just an edge cloud provider, our understanding is that Rafay is also aiming to integrate a number of critical services developers need to manage the application deployment lifecycle across edges running in different geographies. In addition, Rafay expects to provide network services that developers can leverage to scale workloads running across various ‘Rafay edges.’
  • As with other providers, educating developers on use cases will be key, as well as differentiating from the services that CDNs and cloud providers offer.

Opportunity/Value proposition:

Founder Haseeb Budhani talked to Edge Research about the opportunity that Rafay is chasing. Budhani believes that there is a clear market opportunity for giving developers the ability to run their application logic anywhere that they need to. Developers should be able to leverage compute and storage resources at the edge of the network as a service, just as they would with any cloud service. Rafay has termed (and trademarked) this concept the “Programmable Edge.” (For further explication, Budhani outlined what the Programmable Edge should be, and some of the possible use cases in this post on LinkedIn.)

Rafay hasn’t formally launched the service yet, but is presently engaged with partners and customers and expects to deliver a Beta version of the service in the summer of 2018.

Company background:

Rafay Systems was founded in 2017 by:

  • Haseeb Budhani – Co-founder and CEO; formerly co-founder and CEO of Soha Systems
  • Hanumantha Kavuluru – Co-founder and VP of Engineering; formerly co-founder and VP of Engineering at Soha Systems

The co-founders’ previous company, Soha Systems, was acquired by Akamai Technologies in October 2016.

Funding

Rafay Systems has secured an undisclosed amount of seed funding from:

  • Costanoa Ventures
  • Floodgate
  • Menlo Ventures
  • Moment Ventures

Market background:

The heightened interest in and use of microservices and containers has changed how applications are developed; where they are to be deployed is the key question. Performance, cost, and availability of applications are all factors pointing to a need to move application logic closer to end users, to the ‘edge.’ CDNs have the distributed resources needed to do this, and arguably already allow logic to run at the edge of their networks.

Competition:

Cloudflare Workers, Fastly Edge Cloud platform, Akamai’s Distributed Edge Compute and Cloudlets are among the examples of how CDNs are evolving into programmable platforms.
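
As a sense of the programming model these platforms expose, here is a minimal sketch in the Service Worker style used by Cloudflare Workers: a function running at the CDN edge inspects each HTTP request and can answer it directly or pass it through to the origin. The route and header name are hypothetical, and the snippet is meant to run on a Workers-style runtime, not locally.

```typescript
// Minimal sketch of an edge function in the Cloudflare Workers (Service
// Worker) style: the handler runs at the CDN edge for every request.

addEventListener("fetch", (event: any) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Answer a (hypothetical) health probe directly at the edge, no origin hit.
  if (url.pathname === "/ping") {
    return new Response("ok", { status: 200 });
  }

  // Pass everything else through to the origin, adding a header on the way back.
  const upstream = await fetch(request);
  const response = new Response(upstream.body, upstream);
  response.headers.set("x-served-from", "edge"); // hypothetical header name
  return response;
}
```

Other platforms in the list expose different interfaces (Fastly’s edge cloud and Akamai’s Cloudlets are configured and programmed differently), so this is only indicative of the general pattern.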

Rafay’s position is that CDNs aren’t programmable enough. Custom code can be deployed on CDN edges but only by the CDN’s internal engineering and professional services teams. To enable any developer to leverage the infrastructure edge, Budhani argues that a different approach is needed.

Another big difference with the edge services from cloud providers as well as CDNs: those services offer functions that are invoked to handle individual HTTP requests and are bounded by time (e.g., 5 minutes in the case of AWS Lambda for a function to complete). Function-based environments may not be the best fit for applications that run continuously or that generate logs or other types of output, among other factors.

Cloudflare partners with IBM Cloud to offer security and CDN

Cloudflare is moving up in the market for CDN and security services. There’s no surer sign of that than a new partnership that has Cloudflare security and CDN services being offered via IBM.

Key Takeaways:

  • Cloudflare’s deal with IBM shows the company is serious about continuing to expand in the enterprise market, and IBM is a good path to do that, even if IBM Cloud is smaller than the other players in the space.
  • Cloudflare hopes to integrate into other IBM services, including the QRadar security analytics offering, and later the development of applications that leverage Watson for Cyber Security.
  • The deal does not portend any significant change in IBM’s relationship with Akamai, but does represent Cloudflare’s increasing incursion on Akamai’s turf. 

Details:

Cloudflare’s portfolio of services will be offered through IBM Cloud, marketed as Cloud Internet Services. The services will be available via the IBM Cloud dashboard. The services go beyond the CDN services that IBM Cloud as well as other cloud providers have integrated into their IaaS offerings. Cloudflare’s portfolio includes:

Cloudflare services

  • Security: DDoS mitigation, WAF, Bot detection/mitigation
  • Performance: CDN, Smart Routing, Web optimization
  • Availability: Load balancing, Rate Limiting

Cloudflare has other services, but the list above focuses on those referenced on Cloudflare’s web page promoting the integration with IBM Cloud.

IBM is officially a Cloudflare reseller, meaning the services can be sold and deployed by IBM in any existing customer environment, including on-premise, hybrid as well as public cloud.

Cloudflare said the deal grew out of having IBM as a customer. IBM had been using Cloudflare for DDoS and WAF security, as well as load balancing, for its X-Force Exchange business. X-Force Exchange is IBM’s threat intelligence sharing platform.

Cloudflare’s ability to take this initial deal and expand into other parts of IBM sets the stage for other developments, including future integration of Cloudflare’s data set into IBM’s QRadar security analytics offering. Longer term, Cloudflare says it is looking to develop applications that leverage Watson for Cyber Security.

Company background

Cloudflare has been growing rapidly for several years, owing largely to its focus on the SMB market and numerous reseller deals with cloud/hosting providers. In 2017, the company made a concerted push upmarket into enterprise accounts and significantly tilted its revenue mix toward that market, reaching milestones such as a $150m run rate and topping the 500-employee mark last year.

Competition: 

Akamai is the largest CDN vendor and does a significant amount of business in security services like DDoS and WAF. Akamai is a long-time IBM partner, and just last year announced integration of its CDN offering with IBM Cloud. The deal does not, in our view, portend any significant change in the relationship.

Akamai said IBM is a significant reseller of its entire portfolio of services via the IBM Global Technology Services organization, under the Edge Delivery Services branding. IBM’s Global Security Services organization also sells Akamai’s DDoS services and has integrated its QRadar SIEM with Akamai’s Kona security services. Additionally, Akamai was recently named IBM Watson Customer Engagement partner of the year for helping provide secure delivery of the Watson AI service and Watson API.

The Cloudflare deal is not exclusive; indeed, most cloud service providers (other than AWS) offer CDN via multiple vendors. Microsoft Azure offers CDN via both Akamai and Verizon Digital Media Services (EdgeCast). Google Cloud offers CDN Interconnect, which charges a fee based on egress traffic to CDN providers including Akamai, Cloudflare, CenturyLink/Level3, Fastly, Instart Logic, Limelight Networks, and Verizon.

Customers:

Cloudflare did say there are already joint customers who are using its services, including 23andMe, the genetics testing service.

Edge compute forecasts, demos from MWC ’18

Edge Research aims to keep investors, vendors, and developers up to date with developments in edge compute and network services. Our first newsletter provides analysis of some recent developments coming out of Mobile World Congress ’18. Soon, we’ll be adding coverage of noteworthy startups to our newsletter.

Startups: show where you are in the competitive landscape

One of the key challenges startups face is how to position their offerings in the marketplace against incumbent vendors. And what do many startups neglect to do when briefing industry analysts? Show a slide about the competitive landscape!

Presumably, executives have shown that slide to investors. For one thing, it’s important to show who you are selling to if you are trying to raise funding. VCs want to see whether you are selling to a large enough market to justify their investment, and how defensible your position is.

That’s why Steve Blank offered up the Petal Diagram as a way to show investors that there are multiple markets that a startup can sell to. Tom Tunguz, VC at Redpoint, recently wrote a counterpoint view that the Petal Diagram falls short in showing how a startup differentiates itself in a market. He suggests that the traditional matrix still works best for showing differentiation in established markets, while a pyramid diagram is effective for showing entry into a new market segment. In either case, a competition slide is essential for showing how the startup is going to win market share.

And yet, in my 17-plus years as an analyst, few start-ups ever included this slide when briefing me; I’d estimate fewer than 5% of companies in that time. Perhaps they were hoping that an unfocused sales strategy would go unnoticed, or that nobody would see that they were going straight up against a multi-billion-dollar incumbent with little more than a lower-cost alternative.

Actually, I’m as interested as the investor in drawing a detailed picture of the competitive landscape. When I advise your potential customers and investors on who has an interesting, differentiated product and has a viable strategy for selling that product (or service), that advice comes out of knowing what the competitors are or are not doing.

In terms of a practical example, in the CDN market, there are several market segments that a company could compete in – media delivery, web performance (including dynamic site acceleration and mobile acceleration) and security. Even within security, one can argue there are multiple products, each with a different but slightly overlapping set of competitors. A petal diagram could be used to highlight a particular strength of yours. Or, as Tunguz suggests, a pyramid diagram could be used to show that your company is addressing the SMB market, rather than the hard-to-crack Fortune 500 space dominated by Akamai and a small handful of competitors.

Just make sure to show up to a briefing with a deck that has a competitive landscape slide, even if it’s just a tried and true matrix.

Big 3 Cloud providers at NAB: Microsoft wants to edit AWS out of the M&E market

Around five years ago, “clouds” started to roll in to the National Association of Broadcasters (NAB) show in Las Vegas in earnest. With a new record attendance of 107,000 people, and companies like Google and Facebook vying for attention amongst ‘traditional’ companies like Sony, Canon, Adobe and Avid, have we reached peak cloud?

Not hardly. Five years ago, the Hollywood feature film “Flight” was the first film to have all its visual effects shots (400 in all) rendered “in the cloud.” But rendering (the process of generating and displaying a photorealistic object) was already something that film houses had been doing off premises for years, even if it wasn’t called cloud or done on a public cloud service like Amazon Web Services. Still, the people involved in video production at the NAB show are deep into using cloud-based services, regardless of whether the video is delivered via terrestrial signals or the internet, and announcements from companies like Avid, which has partnered to move more of its applications to Microsoft’s cloud, are just the latest sign that the move is a work in progress.

The presence of the major cloud providers Amazon (AWS Digital Media Solutions), Microsoft (Azure Digital Media) and Google (Google Cloud Media Solutions) at NAB is nothing new, but their show floor presence is more expansive than ever.

Google, for one, placed itself at the forefront, literally, with a significant booth presence in the South Hall of the LVCC, where the tech vendors are congregated. Google has built up its digital media offerings through a number of acquisitions, ranging from foundational technologies like DRM (Widevine, 2010) and video codecs (On2, 2010) to services such as digital rendering (Zync, 2014) and applications such as an online video platform for managing OTT video services (Anvato, 2016). In the case of Anvato, Google noted that it has moved Anvato onto Google Cloud Platform (GCP), and highlighted customers such as Scripps Networks Interactive, which is using Anvato and GCP to leverage machine learning to insert personalized, local ads into live video streams on the fly.

AWS is the gorilla of the cloud market, however, and is working furiously to be a big player in digital media. In media, symbolism and flash go hand in hand, and AWS sent a message by partnering with NASA to live stream 4K video (that’s UHD, or ultra-high definition) from the orbiting International Space Station, using encoding technology from Elemental Technologies (acquired in 2015) and delivered via Amazon’s CloudFront CDN. Anybody on the planet with enough bandwidth and the required UHD screen could see the broadcast. How are traditional terrestrial broadcasters competing with that? By moving production workflows and distribution to the cloud.

Along those lines, one of the bigger announcements indicating the movement of content production to the cloud was Avid’s partnership with Microsoft. Avid, one of the main players in non-linear editing systems for filmmakers, will make Azure its preferred cloud computing partner for SaaS and PaaS offerings built on Avid’s MediaCentral platform, and the two companies will co-develop new offerings as well. Avid announced Al Jazeera as a client that has moved to an all-cloud deployment of Avid services. Enterprise pricing for a combined IT stack, from compute through application, is one attraction, but the ability to deploy assets in any combination of on-premises infrastructure and private or public cloud also gives media companies a flexibility that might ease their anxiety about the path to cloud deployments.

Key Takeaway

Although Amazon is the dominant public cloud provider and has been working hard to build up a digital media services ecosystem, Microsoft is not to be counted out. Its roots in the entertainment and media industry go back to the days when it worked to displace proprietary platform players like Silicon Graphics Inc.; the relationship with Avid was cemented in 1998, when Microsoft sold Softimage’s 3D animation and graphics production tools to Avid. Microsoft and Avid will be formidable players in the move to cloud-based media production. Having also announced a partnership under which updates to Adobe Creative Cloud applications (including Premiere Pro) will leverage Microsoft Azure as the preferred cloud provider, Microsoft looks well positioned to be a leader in the M&E segment of the cloud market. With the other major provider of video editing tools, Apple (Final Cut Pro), likely to take a DIY approach to compute infrastructure for the time being, Google will have a hard time forging into the high end of the video production ecosystem, but it still has a shot at building a presence on the content management and distribution side, given its ability to acquire and integrate technology.