“…and Leon is getting larger!”

For the pop culture-savvy, this is a bit from the movie “Airplane!”. For those in the telecom and industrial process and control industries, it’s a warning that fog computing has rolled in. It’s a gratuitous segue, but any reference to a line by Johnny is worth the effort, however clumsy.

By empirical definition, “fog” is the condensation of water vapor in the air, at dew point, where low-level clouds can form. Fog computing, however, is an emerging segment of the wide area network that is the natural extension of “cloud” networks. Properly defined, fog computing sits at the border between the data center and the wide area network (WAN). In a fog network the compute function is distributed to the edge of the network. It’s there that data is acquired and/or created and can be acted upon at the most logical and efficient place between the data source and the cloud.

There are some differences between fog and cloud computing. Application latency requirements are low in fog computing, while in cloud computing they can be much higher. Fog computing is highly distributed and designed for real-time interactions rather than centralized and designed for batch processing. Additionally, the communications links to the cloud are both terrestrial and wireless, whereas communications to the fog are primarily wireless.

What is Driving Fog Computing?
The Internet of Things, or IoT, has many definitions. In terms of industrial processes, IoT can be defined as a group of sensors and actuators linked by a wireless medium to perform distributed sensing and actuation tasks, creating a sensory network. A sensor is a device that detects and responds to some type of input such as light, heat, motion, moisture, or pressure. The output is generally a signal that is transmitted over a network for processing. Actuators exert physical actions such as open, close, move and focus.
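
To make that concrete, here is a minimal sketch, in Python, of what a sensor node's transmit loop might look like. It is an illustration only: the collector endpoint is hypothetical, and the temperature "driver" is a stand-in for whatever the real device exposes.

```python
# Hypothetical sensor node: read a value, ship it over the network.
# The collector endpoint and device driver below are assumptions for
# illustration, not any vendor's API.
import json
import random
import socket
import time

COLLECTOR = ("collector.example.net", 9000)  # hypothetical upstream endpoint

def read_temperature() -> float:
    """Stand-in for a real sensor driver; returns degrees Celsius."""
    return 20.0 + random.random() * 5.0

def run(node_id: str = "sensor-01", interval_s: float = 5.0) -> None:
    with socket.create_connection(COLLECTOR) as conn:
        while True:
            reading = {"node": node_id, "temp_c": read_temperature(), "ts": time.time()}
            conn.sendall((json.dumps(reading) + "\n").encode())
            time.sleep(interval_s)

if __name__ == "__main__":
    run()
```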

The massive growth of sensory networks is creating a situation where the line between WAN and data center becomes increasingly obscured. IoT means many things to many people, but every definition implies massive scale, both in the number of devices and in the volume of data generated by these sensory and control networks.

First generation IoT applications all follow similar architectures – star-hub or branch-tree. With each, the intent is to collect field data and measurements and then transmit them to the data center for processing, classification and action.

Next generation IoT will need far more distributed sensor and actuator nodes, near real-time processing of the collected data at the point of acquisition, and compute power available in the WAN before information is disseminated to the data center.
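
As a hedged sketch of what "processing at the point of acquisition" can look like: the node reduces a window of raw samples to a summary, acts locally when a threshold is crossed, and ships only the summary upstream. The threshold and the callback names are assumptions made for illustration.

```python
# Sketch of point-of-acquisition processing on a fog node: raw samples
# are reduced locally, and only a summary (or an alarm) travels upstream.
import statistics

PRESSURE_LIMIT = 8.5  # hypothetical threshold, in bar

def process_window(samples, actuate, notify_cloud):
    """Summarize one window of raw readings and act locally if needed."""
    summary = {
        "count": len(samples),
        "mean": statistics.fmean(samples),
        "peak": max(samples),
    }
    if summary["peak"] > PRESSURE_LIMIT:
        actuate("close_valve")   # act at the edge, no cloud round trip
    notify_cloud(summary)        # ship kilobytes upstream, not the raw stream
    return summary
```

The point of the sketch is the shape of the data flow: the raw stream never leaves the point of acquisition.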

Fog Versus Edge Computing
Although these terms are often used interchangeably, there is a subtle difference. In a fog design, sensory data is transmitted across a local network from many endpoints to a shared compute platform for processing. In an edge design, the sensors are directly connected to the compute platform.

Edge computing allows for faster processing and reduced latency, and removes a possible failure point by eliminating the transmission step before processing. Fog computing is more scalable and comes at a slightly lower cost, as the computing node is shared among more data points.

Fog/Edge Computing Characteristics
Fog computing nodes are deployed at the edge, away from the main cloud data centers. Running cloud workloads on fog nodes enables low and predictable latency. Fog application code runs on fog computing nodes as part of a distributed cloud application, and may include code required only in that location-specific context, such as serial-to-IP conversion.
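
As a minimal sketch of that serial-to-IP case, assuming the third-party pyserial package (the device path, baud rate and upstream address are placeholders):

```python
# Location-specific fog code: relay frames from a legacy serial device
# onto an IP network. Device path, baud rate and upstream endpoint are
# hypothetical; a production relay would add reconnect and error handling.
import socket
import serial  # pip install pyserial

def serial_to_ip(device="/dev/ttyUSB0", baud=9600, upstream=("10.0.0.5", 7000)):
    with serial.Serial(device, baud, timeout=1) as tty, \
         socket.create_connection(upstream) as conn:
        while True:
            frame = tty.readline()   # one frame from the legacy device
            if frame:
                conn.sendall(frame)  # relay it over IP to the fog application
```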

Fog computing nodes are widespread and provide applications with awareness of device geographical location and device context. They can also cope with the mobility of devices: for example, if a device moves farther away from its current servicing computing node, the fog node can redirect the application on the mobile device to associate with a new application instance on a fog computing node that is now closer to the device.
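
As a toy illustration of that redirect logic, consider picking the fog node closest to a device's reported position. The node names and coordinates below are invented; a real system would use latency probes or a geo-aware service registry rather than raw coordinate distance.

```python
# Toy nearest-node selection for a mobile device. All nodes and
# coordinates are made-up values for illustration.
import math

FOG_NODES = {
    "fog-east": (40.71, -74.00),
    "fog-west": (34.05, -118.24),
}

def nearest_node(device_lat, device_lon):
    def dist(pos):
        return math.hypot(pos[0] - device_lat, pos[1] - device_lon)
    return min(FOG_NODES, key=lambda name: dist(FOG_NODES[name]))

# A device that has moved west gets re-associated with the western node:
assert nearest_node(36.0, -115.0) == "fog-west"
```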

The Impact on Network Communications
Within the Internet of Things, multiple heterogeneous wireless communications technologies such as WiFi, ZigBee, Bluetooth and cellular LTE coexist. The enablers of the IoT are the communications equipment manufacturers, the telecommunications carriers, and the implementation of enabling software technologies such as Network Functions Virtualization (NFV) and Software-Defined Networking (SDN).

Communications equipment manufacturers such as Redline Communications (www.rdlcom.com) design communications equipment for deployments over very large coverage areas that can scale from a few remote devices to over 100,000. Freewave Technologies (www.freewave.com), another communications equipment manufacturer, has developed wireless machine-to-machine (M2M) communications equipment enabling direct communications between devices on the sensory network.

Telecommunications carriers such as AT&T (www.att.com) and Verizon Wireless (www.verizonwireless.com) are upgrading to the next generation of networking, 5G. While the focus on mobile broadband will continue with 5G, support for a much wider set of diverse usage scenarios is expected. The three major usage scenarios include: (1) enhanced mobile broadband; (2) ultra-reliable and low-latency communications; and (3) massive machine-type communications from IoT. For the carriers, 5G is a scalable, energy-efficient, secure communications infrastructure.

To deliver the features proposed for 5G and beyond, it will be necessary to design and deploy a network architecture that moves away from today's proprietary solutions and toward open platforms that offer significantly improved scalability, as well as increased efficiency, agility and flexibility. Additionally, these open platforms offer more programmability and automation capabilities to simplify infrastructure management and complexity. Although many service providers are discovering the benefits of implementing some aspects of NFV and SDN within their current networks, the role of these technologies will be vastly expanded beyond their current implementations and will become foundationally critical to 5G.

Keeping it Running
As dependence on IoT devices increases, the infrastructure that makes the IoT a functional reality must improve. Customers will call for stringent Service Level Agreements (SLAs). The design of a fog computing infrastructure will require automatic detection of and recovery from outages. Additionally, the ability to provision, maintain and repair critical infrastructure will be essential as dependency on IoT functionality increases.
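
At its simplest, automatic detection and recovery might look like a watchdog that polls a node's health endpoint and fires a restart hook after repeated failures. This is a hedged sketch; the URL, retry budget and restart action are all assumptions.

```python
# Minimal outage watchdog: poll a health endpoint, trigger recovery on
# repeated failure. The endpoint and restart hook are hypothetical.
import time
import urllib.request

def watchdog(health_url, restart, max_failures=3, interval_s=10):
    failures = 0
    while True:
        try:
            with urllib.request.urlopen(health_url, timeout=5) as resp:
                failures = 0 if resp.status == 200 else failures + 1
        except OSError:                 # connection refused, timeout, HTTP error
            failures += 1
        if failures >= max_failures:
            restart()                   # e.g. power-cycle the node or its service
            failures = 0
        time.sleep(interval_s)
```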

One of the most overlooked and most critical components of fog computing is power. IoT sensory networks and 5G small cell deployments carry built-in assumptions about the availability of power. Grid facilities will not always be present – requiring a new type of renewable, portable power. Business use cases for portable power/compute/network services include:

·   Oil & Gas
·   Agriculture
·   Security and Surveillance
·   Transportation

The ability to provide reliable, uninterrupted power to the sensory network, the communications network and the compute platform is the foundation on which the reliability of a fog computing network is built.

Solis Energy (www.solisenergy.com) has addressed the requirements of fog computing by creating small, self-contained, hardened Tier 1 and Tier 2 data centers. These systems integrate compute, monitoring, battery and power into a ruggedized cabinet that can be installed virtually anywhere and powered by solar, wind or grid power, with connections for auxiliary generator power.

What’s Next?
Fog computing, IoT and the cloud are already making an impact on industrial processes. The success of fog computing will be determined by its reliability and resilience. As fog computing evolves, remote power and compute platforms will become tightly integrated into both 5G and IoT networks, providing power, compute and network connectivity wherever it’s needed.

Earlier this summer Apstra announced AOS - a vendor-agnostic datacenter automation platform designed to simplify the complexity of service creation in the datacenter as well as the broader SDWAN.  The company offers an interesting and innovative solution to a problem that has, to date, hindered the growth of software defined WAN services.

This blog isn't an endorsement of Apstra, although we do think it's cool, but it does serve as an example of another firm recognizing the problems with today's software-centric way of looking at the network.

Datacenter engineers are typically Linux savvy and understand the administration, care and feeding of large, complex Linux environments.  Network engineers - henceforth SDWAN engineers - understand the complexity and nuance of wide area networking: typically router-centric, traffic-centric engineering focused on the WAN itself and not any one physical location.

It's like two ships passing in the night - Linux-savvy administrators are not versed at all in the ways of WAN protocols or wide area networking.  Conversely, the router jockeys running the WAN are not at all versed in the ways of Linux in general, much less the finer aspects of scripting in Python.  Together the groups have the skill set to run end-to-end services.

We at SDNNFV.NET believe this is a big reason why OpenStack, so successful with compute and storage, has not seen the same success with its network component.  Networking requires a different engineering discipline than compute and storage do.

Hopefully we can share our collective knowledge - WAN engineers learning Python, and Linux engineers learning there is life after IPTABLES.  Until then it's "Who's on first?"

Earlier this month the Federal Communications Commission (“FCC”) released its much-anticipated Notice of Proposed Rulemaking (NPRM) setting forth and seeking comment on proposed rules to govern the privacy practices of broadband internet access service providers (BIAS providers).

As one would expect, the decision by the FCC has caused much in the way of commentary - some of which this web site agrees with and some that we do not.   Our positions on the matter are not of relevance, however; the reader's understanding of what is at stake is.

Feeling industrious, the FCC has decided to provide a new area of debate by introducing its definition of an "edge service".  In reading the commentary it is clear the FCC is defining edge services to include everything from websites, web-based email and streaming services to mobile applications and search engines.

It's a clear and focused attempt to regulate the ISP market in general and the US cable (MSO) industry in particular - another obvious and populist attempt to burden the cable broadband industry.  Saying the rule changes are being offered to protect the consumer, and then deliberately carving out the services that are the most intrusive and privacy-bending, is more an act of political agenda and less so of what is best for John Q. Public.

Google and Facebook, to name two, are the most egregious violators of consumer privacy on the internet today.  Yes, it's an opt-in service, but a subscriber must accept the terms of service to be able to access the service at all.  Certainly not a free market solution - but then again, when does the FCC consider the free market?

Unfortunately, consumer privacy is whatever the government says it is.  It's our view that the question is not just whether the information derived from a subscriber's IP address belongs to the ISP or to the web application.  Rather, after reading the release and commentary this week, it's clear the FCC believes the data from an IP address is neither the ISP's nor the web application's - it is the government's.  As with all things, privacy is relative...
 
Last month the EU provided Google with an early tax day present - it decided to move forward with antitrust charges against the web giant.  As reported by the Wall Street Journal, this move by the EU - should the EU prevail - could result in charges and fines that could exceed $6 billion (with a b).

The European charges focus on complaints that Google uses its dominant Internet search engine to favor its own services over those of rivals, people familiar with the situation said. Rivals say Google search results in areas like travel, shopping and maps increasingly favor Google’s own offerings - which its customers pay for - over links to similar on-line services run by rivals.

To all of which we here at SDNNFV.net labs say... No kidding??  Of course customers who pay Google for its search optimization services will rank above those who do not.  After all, what is "search optimization"?

Of course, the greater irony in this whole discussion is that Google is being accused of providing preferential treatment to customers willing to pay.

Gosh kids... where have we heard that before?  I know - when Google and other OTT players like Netflix, Hulu and the like all complained to the FCC that service providers were asking for payment to provide a better quality of network service.

If it sounds like the same thing, only different - it is.  Google is being accused of the very thing it complained about to the FCC.  We here at SDNNFV call that karma - and yes, we believe Google is guilty of all the charges the EU has brought forward.

The "Don't Be Evil" moniker aside, Google is no longer the Internet phenomenon that captured people's imagination.  It is a multi-billion dollar juggernaut that starts projects on a whim that remain in beta forever, or that are strictly designed to hammer telecommunications providers into improving services (see Google Fiber).

For Internet policy - much like life in general - we believe in laissez faire - the market will always prevail.  Service providers should be able to offer qualities of service just as Google should be able to provide quality of service for its search results.   

In the political climate these days Google stands little chance of escaping the grasp of the EU - much as the service provider community did not escape the FCC.  Karma Google... karma...

Earlier this month the FCC released the details of its decision to reclassify Internet services as a Title II regulated public service (a copy of the release can be found here:  http://www.fcc.gov/document/fcc-releases-open-internet-order ).

We'll defer the political debate currently embroiling the telecommunications industry.  There is more than enough of that type of rhetoric to go around.  Left leaning or right leaning - there is bipartisan agreement this is the single most important regulatory action affecting the telecommunications industry since the Telecommunications Act of 1996.

Back in 1996 there was also rhetoric around the politics of such legislation.  Politics aside, it was the catalyst for a sea change that created an entirely new set of industries.  Built on the concepts of the IXC, ILEC/RBOC and CLEC, the telecommunications boom occurred, and with it unprecedented growth and opportunity.  The Y2K hangover aside, the amount of industry and technology created was astounding.

Of interest to this author is how large network operators will react to this new regulation and what it means for technology vendors. As was the case in 1996, some informed professionals predict a precipitous drop in technology investment as the industry tries to come to terms with the new rules.

We beg to differ...

Suffering from a never-ending glass-is-half-full approach, we offer a pragmatic but optimistic perspective.  No matter the outcome of this rulemaking - and whatever form the final regulation takes - one thing is clear: operators will now need to be prepared to show compliance with whatever those regulations end up being.

It does not matter what degree of compliance Title II regulation will require - the fact that it now exists changes everything.

Be Careful What You Ask For

The irony of it all, as the left leaning will learn, is that to ensure operators are not violating the new privacy and net neutrality regulations, operators will collect, store and report on more customer traffic data than ever before.

To ensure a subscriber’s privacy is protected, the operator will need to collect everything.

As a result there is going to be a boom in network heuristics and subscriber behavior analytics that will drive capital investment in operator networks the world over.  Traffic sampling will be the exception and not the rule - much to the DPI industry’s excitement.
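
To see why sampling becomes the exception, compare a 1-in-N sampled export with full collection. The sketch below uses invented flow records; real exporters (NetFlow/IPFIX-style) carry far richer fields.

```python
# Illustration only: with 1-in-N sampling the reporting system sees a
# fraction of flows; full collection is simply N = 1.
import random

def export_flows(flows, sample_n=1):
    """Yield the flow records that would reach the reporting system."""
    for flow in flows:
        if sample_n <= 1 or random.randrange(sample_n) == 0:
            yield flow

flows = [{"src": "10.0.0.1", "dst": "8.8.8.8", "bytes": 1200}] * 1000
sampled = list(export_flows(flows, sample_n=100))   # roughly 10 records
everything = list(export_flows(flows, sample_n=1))  # all 1000 records
```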

The Opportunity

Initially DPI vendors will benefit, but the reality is that to be 100% compliant, the economics of a full DPI deployment can be a disaster for the operator.   There are new opportunities for operators who learn how to build out their subscriber reporting systems on technologies that are not DPI dependent.

We would like to think the end result of this FCC rulemaking will be another broad rise in network investment and the emergence of new technologies to benefit everyone.
 
The recent security breach at Anthem is another example of how difficult it is to secure any type of data on the Internet.  More and more personal data is accruing on-line, faster than ever before.  It's a wonder breaches like Anthem's don't occur more often.  Now more than ever it's important to secure not only your data but also your activity on-line.  Security firms are not blind to this opportunity - the growing market in on-line security and security-enabled solutions, for both business and personal use, is accelerating.

Along with this growth comes a growing number of attack vectors - there is more data on-line than ever before, and there are more ways to access that data than ever before.  Systems that were considered unrelated and autonomous only a few years ago are now forever interconnected.

The Target breach of December 2013 was the result of entry via the company's store HVAC systems!!! I am confident the Target IT team invested in all the appropriate systems to secure their data - but I am sure none of them included securing the stores' environmental control systems.

The Internet of Things (IoT) will create opportunity for hackers and malcontents alike - securing machine-based IP communication services, the heartbeat of IoT, already exceeds the reach of conventional security measures. Organizations are working to provide a more secure on-line experience - which we at SDNNFV.net labs applaud with much vim and vigor.

However, with even more vim and a dash of vigor we ask - and then what?  After all measures are taken it still won't be enough.  Data breaches will still occur.  We say this not to cast the security industry in a disparaging light, but rather to ask whether we should look at the problem a bit differently...

We say: trust no one.  Stop assuming the network you are accessing is secure... it's not.  No matter the access - be it commercial or municipal WiFi or your company's internal IT network - assume those networks are compromised and secure the data yourself.  The panacea would be to provide security for data regardless of the underlying broadband network.
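
What might "secure the data yourself" look like in practice? A minimal sketch, assuming the third-party cryptography package: encrypt before the data ever touches a network you don't control. Key management - the genuinely hard part - is omitted here.

```python
# Client-side encryption before transmission. Assumes `pip install
# cryptography`; in practice the key would live in a local keystore.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # keep this off the untrusted network
cipher = Fernet(key)

token = cipher.encrypt(b"my data, my problem")   # safe to send anywhere
assert cipher.decrypt(token) == b"my data, my problem"
```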

IT departments can continue to invest in their networks - and security vendors can continue with new and improved PowerPoint solutions - just don't trust them to solve every problem.  They can't.

That needs to be left up to you... after all, it's your data...
 
This week President Obama announced changes to US policy toward the island of Cuba.  I'll leave politics aside - it's too emotional an issue for those closest to the discussion.  Eventually relations will normalize, and the US embargo that has kept Cuba at bay will finally go away.

When it does, Cuba will be an island released from a time warp - for the people of Cuba it has been 1959 since... well... 1959.  Less than 5% of the island's population has ever been on the internet.  Mobile wireless service is virtually nonexistent.  There is one (1) telecom on the island, and the overall infrastructure is very similar to what Andy Griffith used in Mayberry - analogue POTS service.

I think this poses a wonderful opportunity for operators and professionals alike.  Trenching fiber across the entire island alone will be a multi-year endeavor.  While the fiber is trenched, wireless towers will accompany the build out - providing a massive influx of capacity and opportunity for telecom providers willing to make an investment.

What an amazing opportunity for technology providers and the people of Cuba alike.
 
SDNNFV.NET is an organization focused on providing open source solutions for businesses and service providers alike.  Solutions that can scale and be easily deployed at a remote office, a corporate campus or a carrier core all have different and unique requirements.  So while it's true that SDNNFV.NET's flagship platform is a focused and purpose-built solution, the applications that can be powered by Bedrock are as diverse and unique as our users.

This blog will be much like our user base... diverse and energetic, unafraid of addressing issues outside the scope of SDN or NFV applications - but related nevertheless.  We hope to use this blog to help correct misconceptions in the marketplace and to help thin the rhetoric from vendors and "experts" alike.

This blog and its content are entirely the views of SDNNFV.NET's management and management alone - no outside influence here, just honest and (we hope) informed opinions designed to stimulate open dialogue.