
The Indian Armed Forces have made quantum leaps in technology over the past decade when it comes to C4I, Net Centric Warfare and providing power to the edge. With projects such as the NFS, the TCS and the IACCS, we are on the right path towards "information advantage" through information sharing, shared situational awareness, and enhanced command and control capabilities.

However, seeing the ramifications of recent cyber attacks, the rise of the social media battlefield, and the use of the latest technology by jihadists and by state and non-state actors in our neighbourhood, I believe there is a need to revisit our strategies from an architectural perspective so as to maintain technological supremacy over the enemy.

I believe this discussion is more important now than ever, given the US DoD's focus on the "Third Offset Strategy": the pursuit of technological superiority in anti-access and area-denial, guided munitions, undersea warfare, cyber and electronic warfare, human-machine teaming, wargaming, and the development of new operating concepts.

The US DoD Third Offset Strategy, at its base, has five common technological components:

  1. Deep learning systems – with applications such as early warning and leading indicators in cyber defense, or big data analytics that make sense of the Facebook posts of terrorist organizations and find patterns.
  2. Human-machine collaboration – with applications such as unmanned vehicle systems fitted with sensors, or "fog computing" of data that gives the soldier on the battlefield the power to make better decisions.
  3. Human-machine combat teaming – leading to combined operations by humans and unmanned vehicles.
  4. Assisted human operations – biosensors and other IoT devices and wearables that augment war-fighter capabilities.
  5. Network-enabled, cyber-hardened weapons.

The intent of this series of blog posts is to break down the architecture of the information systems infrastructure used by the Indian Armed Forces and propose a strategy and vision for the next five years. We will focus on the following technology areas:

  • Networking technologies
  • Computing and storage
  • Unified Communications capabilities
  • Mobile device and application technologies
  • Cyber Security
  • Network Operations

References:

3rd Offset 101: http://www.dodlive.mil/index.php/2016/03/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/

Net Centric Warfare: http://www.dodccrp.org/files/Alberts_NCW.pdf

Power to the Edge: http://www.dodccrp.org/files/Alberts_Power.pdf

Understanding Information Age Warfare: http://www.dodccrp.org/files/Alberts_UIAW.pdf


This blog post focuses on the computing services required in a defense communication network architecture, which in some ways resembles a service-provider-grade network.


Computing Services

With the considerable expansion of defense networks such as the TCS and AFNET, the infrastructure is poised to provide the next generation of private cloud services to its users, including excellent scalability and self-service, to cater to the dynamic and unpredictable computing needs of the armed forces. An on-premises or private cloud provides a computing environment for mission-critical workloads, complete control over uptime requirements, and flexibility in meeting management demands.

However, the private cloud should not be just a collection of compute horsepower residing in armed forces data centres. Rather, it should provide an environment not unlike public cloud offerings such as AWS and Google Compute Engine, with self-service and scalability, multi-tenancy, the ability to provision machines and change computing resources on demand, and the ability to create machines for complex jobs such as big data. Such an infrastructure also needs to factor in chargeback tools to track compute usage, so that the entities using the resources are billed internally only for what they consume.
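To make the chargeback idea concrete, here is a minimal Python sketch of how metered usage could be rolled up into an internal bill per tenant. The data model, tenant names and rates are hypothetical placeholders, not taken from any real platform:

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical rates per resource unit; real rates would come from the
# organisation's internal costing policy.
RATES = {"vcpu_hours": 2.0, "ram_gb_hours": 0.5, "storage_gb_hours": 0.1}

@dataclass
class UsageRecord:
    tenant: str          # e.g. a command, directorate or unit
    resource: str        # one of the keys in RATES
    quantity: float      # units consumed in the metering window

def chargeback(records: list[UsageRecord]) -> dict[str, float]:
    """Aggregate metered usage into a per-tenant internal bill."""
    bills: dict[str, float] = defaultdict(float)
    for rec in records:
        bills[rec.tenant] += rec.quantity * RATES[rec.resource]
    return dict(bills)

# Example: two tenants consuming compute and storage in one window.
records = [
    UsageRecord("northern-command", "vcpu_hours", 480),
    UsageRecord("northern-command", "storage_gb_hours", 2000),
    UsageRecord("air-hq", "ram_gb_hours", 1024),
]
print(chargeback(records))
# {'northern-command': 1160.0, 'air-hq': 512.0}
```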

Why should the armed forces develop a strong private cloud foundation?

Well, it's because the way data is consumed is evolving at a very rapid pace. There is no denying that the public cloud is growing faster, but the private cloud is going to continue to grow as well.

http://www.cisco.com/c/dam/en/us/solutions/collateral/service-provider/global-cloud-index-gci/white-paper-c11-738085.pdf

According to Cisco's Global Cloud Index (link above), 70% of all cloud workloads will still be running on private clouds by 2018.

With the armed forces' interest in IoT and sensor-based technology, new devices getting connected every day, and an increasing focus on mobile applications, there will be a considerable rise in traffic coming into the data centre as the number of workloads keeps increasing. This trend is obvious in data centre networks, where Ethernet ports are shifting from 10G to 40G and beyond.

Where to start building a private cloud

We're observing a global shift in data centre trends. The years from 2000 to 2016 were the era of the "traditional data centre", where the focus was on efficiency and automation, and significant effort went into efficiency, speed and simplicity through consolidation and virtualization. The new trend is a further increase in speed and a move towards "digital experiences" by offering IT as a service (IaaS, PaaS, SaaS, XaaS).

The key categories to consider when deciding to move towards a private cloud infrastructure are the infrastructure itself, security, policies and procedures, management and administration, and finally, cost.

The cloud is a model for elastic compute resources that are provisioned rapidly through a self-service portal with minimal management interaction. If we compare this to the traditional process of requesting resources through IT, we notice that many organisations have layers of approval that exist to prevent misuse, provide accountability and, ultimately, enable a form of chargeback for the services. The cloud model goes against many of these existing procedures, so either the armed forces have to adopt a new model to fit the private cloud, or the private cloud has to adapt itself to the defense establishment's unique needs, policies and procedures. More than likely, it will be a mix of both extremes, allowing the benefits of what the private cloud has to offer while still incorporating some of the existing guidelines and procedures.
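As one illustration of such a mix, here is a small Python sketch of policy-based request routing: small, routine requests get self-service auto-approval, while anything larger or mission-critical still queues for manual approval. The thresholds and classifications are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    requester: str
    vcpus: int
    ram_gb: int
    classification: str  # e.g. "routine" or "mission-critical"

# Hypothetical policy: small, routine requests are self-service;
# anything larger or mission-critical goes through approval layers.
AUTO_APPROVE_LIMITS = {"vcpus": 8, "ram_gb": 32}

def route_request(req: ProvisionRequest) -> str:
    if (req.classification == "routine"
            and req.vcpus <= AUTO_APPROVE_LIMITS["vcpus"]
            and req.ram_gb <= AUTO_APPROVE_LIMITS["ram_gb"]):
        return "auto-approved: provision immediately"
    return "queued for manual approval"

print(route_request(ProvisionRequest("signals-unit", 4, 16, "routine")))
# auto-approved: provision immediately
print(route_request(ProvisionRequest("signals-unit", 64, 256, "mission-critical")))
# queued for manual approval
```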

So let’s break down some of these top private cloud considerations.


Step 1 – The Foundation – A data centre telemetry and analytics platform that offers visibility and forensics

The bedrock of a private cloud is a platform that provides insight into traffic flows, so that operational and security-related decisions are taken in a well-informed manner. It should also have the ability to generate, or at least recommend, zero-trust and micro-segmentation policies amongst workloads.

Cisco offers an appliance called Tetration, which is essentially a big data analytics engine. It captures flow data from Cisco Nexus DC network switches and from agents running on virtual and physical workloads, crunches those numbers and converts them into usable, actionable information such as Cisco ACI application profiles and application dependency requirements. But what about workloads that won't allow you to install agents on them, such as mainframes and Unix-based systems? Well, you can connect these directly to the Nexus switches, and the switch then provides visibility into the applications touching that device. Now, this appliance doesn't just stop there! It de-duplicates all these flows and stores the flow records in its database, giving you historical forensics for every flow that crosses your network.

Here's what I find to be a very cool feature of the Tetration appliance: it allows you to perform a "what if" analysis before applying a change to the network, to understand its potential impact. For example, let's say you plan to modify a network security policy to block certain traffic flows. You could feed this change into Tetration, which will then report back which flows would break if you were to make it. Nice! Know more about this functionality here: http://cisco.com/go/tetration
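Conceptually, the "what if" check boils down to replaying historical flow records against a proposed rule. Here's a minimal Python sketch of the idea; this is my illustration, not Tetration's actual API, and the addresses are made up:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str   # source endpoint
    dst: str   # destination endpoint
    port: int  # destination port

@dataclass(frozen=True)
class DenyRule:
    dst: str
    port: int

def what_if(flows: list[Flow], rule: DenyRule) -> list[Flow]:
    """Replay historical flows against a proposed deny rule and
    return every flow the rule would break."""
    return [f for f in flows if f.dst == rule.dst and f.port == rule.port]

# Historical flow records harvested from the fabric (illustrative values).
history = [
    Flow("10.1.1.5", "10.2.0.10", 443),
    Flow("10.1.1.7", "10.2.0.10", 443),
    Flow("10.1.1.5", "10.3.0.20", 1521),
]

proposed = DenyRule(dst="10.2.0.10", port=443)
for broken in what_if(history, proposed):
    print(f"would break: {broken.src} -> {broken.dst}:{broken.port}")
```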

Another example of a product that provides visibility into, and analytics on, the traffic flows of your data centre is VMware vRealize Network Insight. It analyzes NetFlow data and assists you in creating security groups and firewall policies, and it monitors both the physical and the virtual network topology. Know more about it: https://www.vmware.com/products/vrealize-network-insight.html
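To give a feel for how such tools turn raw flow data into policy recommendations, here is a simple Python sketch that groups endpoints with identical service footprints into candidate security groups. This is a conceptual illustration with made-up endpoint names, not the actual algorithm used by vRealize Network Insight or Tetration:

```python
from collections import defaultdict

# Observed (source, destination, port) tuples, e.g. from NetFlow export.
flows = [
    ("web-1", "app-1", 8080), ("web-2", "app-1", 8080),
    ("app-1", "db-1", 5432),  ("app-2", "db-1", 5432),
]

# Record which (destination, port) services each source consumes.
footprints: dict[str, set] = defaultdict(set)
for src, dst, port in flows:
    footprints[src].add((dst, port))

# Sources with identical service footprints are candidates for one
# security group sharing the same firewall policy.
groups: dict[frozenset, list[str]] = defaultdict(list)
for src, services in footprints.items():
    groups[frozenset(services)].append(src)

for services, members in groups.items():
    print(sorted(members), "->", sorted(services))
# ['web-1', 'web-2'] -> [('app-1', 8080)]
# ['app-1', 'app-2'] -> [('db-1', 5432)]
```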


Step 2 – Choosing a Private Cloud platform

There are three primary players in the market: Microsoft, VMware and OpenStack.

Microsoft offers its Azure Stack private cloud, which means it is bringing its entire Azure stack from the public cloud into a company's data centre. The end result of Azure Stack for customers will be something that looks and feels like the real Azure, although it runs on their own hardware and under their own management. In a sense, this is the natural progression of Windows Server, as the centre of gravity of operating systems has shifted from a particular runtime to a cluster-wide management system with many runtimes, allowing for many different styles of compute and storage.

VMware has its SDDC stack, which comes with a cloud management platform called VMware vRealize that provides automated delivery of infrastructure and applications, real-time log management, operations management, and costing and usage metering.

OpenStack is one of the most popular open source cloud operating systems today. It can manage compute, storage and networking, and deploy them through an easy-to-use, though somewhat feature-limited, dashboard. Unlike VMware and Microsoft, OpenStack does not have its own hypervisor; it can be used with any hypervisor, but it is typically paired with KVM, which is also open source.
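As a taste of what self-service provisioning against OpenStack looks like, here is a minimal sketch using the official openstacksdk Python library. The cloud, image, flavor and network names are placeholders you would replace with your own:

```python
import openstack

# Connect using credentials from a clouds.yaml entry; "defense-cloud"
# is an assumed name for illustration.
conn = openstack.connect(cloud="defense-cloud")

# Look up an image, flavor and tenant network; the names are placeholders.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("tenant-net")

# Provision the instance and block until it is ACTIVE.
server = conn.compute.create_server(
    name="analytics-node-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once provisioning completes
```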

In summary, there is a tradeoff between a vendor-provided cloud platform and OpenStack. While both can be deployed on-premises and are secure, the vendor-provided option is better supported and typically easier to install, but it brings vendor dependency and is not fully open. OpenStack avoids the lock-in but comes with a steep learning curve.

Cisco takes an integrated approach to the private cloud, offering reporting, analytics, lifecycle management, RBAC, showback/chargeback and open APIs covering both physical and virtual infrastructure. Here are the components of a private cloud architecture as Cisco defines it:

  1. Infrastructure automation – providing self-service consumption of DC infrastructure and multivendor automation and orchestration, along with built-in performance monitoring and capacity planning.
  2. Cloud management – for deploying and managing applications across DCs and private and public cloud environments.
  3. Service management – using self-service catalogues for cloud, application and infrastructure services.
  4. Big data automation – enabling single-touch deployment of Hadoop clusters on Cisco UCS, with integrations for major distributions such as Cloudera, MapR, Hortonworks and Splunk Enterprise.


Step 3 – Service Provider NFV

The defense network of the future can expect video to make up the majority of the traffic flowing across it, along with thousands, if not lakhs, of IoT device connections. The armed forces' service provider networks will need to be virtualized so that they do not remain monolithic, inflexible, costly to maintain and saddled with long innovation cycles. Deploying NFV allows network functions to run on general-purpose hardware, and allows Virtual Network Functions (VNFs) to be scaled up, scaled down and orchestrated.
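At the heart of VNF orchestration is a scaling decision driven by telemetry. Here is a bare-bones Python sketch of such a decision loop; the thresholds and instance limits are hypothetical, and a real NFV orchestrator (an ETSI MANO stack, for instance) would apply a proper scaling policy:

```python
# Hypothetical thresholds for a firewall VNF scaling policy.
SCALE_OUT_CPU = 0.80
SCALE_IN_CPU = 0.30
MIN_INSTANCES, MAX_INSTANCES = 2, 16

def decide_scaling(avg_cpu: float, instances: int) -> int:
    """Return the new instance count for the VNF given average CPU load."""
    if avg_cpu > SCALE_OUT_CPU and instances < MAX_INSTANCES:
        return instances + 1   # add a VNF instance behind the load balancer
    if avg_cpu < SCALE_IN_CPU and instances > MIN_INSTANCES:
        return instances - 1   # drain and remove one instance
    return instances

instances = 2
for avg_cpu in [0.85, 0.90, 0.50, 0.20]:  # sample telemetry readings
    instances = decide_scaling(avg_cpu, instances)
    print(f"cpu={avg_cpu:.0%} -> {instances} instances")
```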

Abhijit Anand
