Information: Renaissance or Revolution?

The dictionary defines Renaissance as “the activity, spirit, or time of the great revival of art, literature, and learning in Europe beginning in the 14th century and extending to the 17th century, marking the transition from the medieval to the modern world.”[1] The Renaissance lasted for a couple of centuries, shaping what has come to be known as the modern world. Great artists and thinkers helped transform a brutal age into an enlightened one. The effects of the Renaissance radiated through Europe, creating major shifts in art, science, music, religion, and humanism.[2]

Revolution is defined as “a radical and pervasive change in society and the social structure, especially one made suddenly and often accompanied by violence.”[3] I’m not sure that the increase in the flow of information has itself been sudden or accompanied by violence, but I do think that the instant transmission of information has created a world where sudden and violent change is more readily possible.

The invention of the printing press gave birth to the Information Age. Never before had humans been able to spread information as far and as easily as they could with the printing press. Previously, information had to be handwritten or copied before it could be handed down or distributed, all at great cost and limited to those with resources and power.[4] The printing press was the start of the Information Renaissance.

As printing technology and the other means by which humans communicate continued to evolve, information began to flow faster and farther. Today, the Internet has thrown the Information Renaissance into a full-blown revolution! Information no longer has to be printed in a physical form; with computers and electronics, it can be transmitted instantly across the globe. No longer does it take days or weeks for information to trickle down, and no longer is the power of information controlled by large governments or corporations. Information has now truly become the property of the people. There are parts of the world where governments still take great pains to control the flow of information among their people, but information always seems to find those who are looking for it.

The Internet has become one of our most powerful tools, but it has also become one of our biggest liabilities. Information can flow freely in cyberspace, but cyberspace has limited ways of ensuring that what flows through its veins is accurate, correct, and free of trickery. Misinformation flows just as freely as information, increasing the need for humans to be able to tell the difference between the two.


[1] The definition of Renaissance. Retrieved December 06, 2016, from

[2] Renaissance. Retrieved December 06, 2016, from

[3] The definition of revolution. Retrieved December 06, 2016, from

[4] The Printing Press. Retrieved December 06, 2016, from

Drones: Hype or Here to Stay?

In today’s high-tech world, drones are becoming commonplace in a growing number of private-sector markets, no longer limited to the world’s most advanced militaries. As computer and electronic technology continues to become more powerful and cheaper, more private companies are able to afford the technology to develop drones for private use. Some of the areas where I have found innovation taking place in drone development are: security and surveillance; industrial and infrastructure inspection; shipping and delivery; precision agriculture; mining; search, rescue, and disaster management; and storm tracking and forecasting.

The first thing people think of when they think of drones is the military. The military has been developing drones for use in warfare for years and has some of the most advanced drones in the world. It uses drones to spy on the enemy, intercept enemy communications, and, more recently, deliver munitions to enemy targets. Because of the varying missions they are designed for, the size and cost of military drones also vary. Some drones are small, man-portable units controlled by a soldier in the field to gather intelligence on the enemy. These are more similar to the drones available on the private market. For long-range missions, the military has developed drones the size of full-sized aircraft, controlled from continents away, that can deliver a wide range of munitions or carry long-range surveillance equipment.[1] Due to their high level of sophistication and cost, large drones are not seen as often in the private sector.

Security and Surveillance

As I mentioned above, one of the first uses of drones in military operations was gathering surveillance on the enemy, and this is also one of the first private-sector uses of drones that I was able to find. Private companies are developing drones similar to military drones and marketing them to law enforcement and other government agencies as a means to gather intelligence from the air. Larger government agencies and police departments have been using helicopters for years, but the expense of owning, operating, maintaining, and staffing a helicopter limits their use by many law enforcement agencies. The primary advantage of drones over helicopters is cost. Drones are allowing even the smallest law enforcement agencies to gain access to aerial imagery that may provide vital intelligence for situations that arise in today’s world. The downside of drones compared with helicopters is the limited range and endurance of smaller drones. Helicopters are able to remain in the air longer and quickly react to situations that may arise on the other side of the city.[2]

The recent increase in terrorist activity across the world is also driving the use of drones in the security and surveillance market. Governments around the globe are spending more money and putting more resources behind the use of drones by law enforcement agencies.[3]


Industrial and Infrastructure Inspection

Utility providers have also long used helicopters to inspect their electric lines and gas and oil pipelines. Power lines and pipelines are often placed in very remote locations and cover large distances, making it difficult for inspectors to access them on the ground. Historically, these companies would contract an external firm to provide aerial inspections, but drones are changing the way these companies conduct them. Again, the cost savings of drones over helicopters is the biggest advantage. Another advantage is that drones can be operated autonomously and can work closer to dangerous locations, so companies do not have to place a person in harm’s way. A drone can get much closer to a high-voltage power line, giving inspectors a closer look at the lines and towers. A drone’s smaller size also lets it get into tighter spaces where a helicopter can’t go.[4]

The Global Positioning System (GPS) has also advanced to the point where even the smallest drones are equipped with GPS sensors. Another advantage of using drones for inspection is that a drone’s exact location can be tracked in real time. Drones can also be programmed to automatically follow GPS waypoints, making the long inspection process quicker.
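As a rough sketch of how that waypoint-following works (this is illustrative logic of my own, not any vendor’s actual flight-controller API), an autopilot can compare its GPS fix against the next programmed waypoint and advance down the list once it is within an arrival radius:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS coordinates."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def advance_waypoint(index, position, waypoints, arrival_radius_m=5.0):
    """Advance past each waypoint the drone has reached.

    Returns the index of the waypoint the drone should fly toward next,
    or len(waypoints) when the whole route is complete.
    """
    while index < len(waypoints):
        lat, lon = waypoints[index]
        if haversine_m(position[0], position[1], lat, lon) > arrival_radius_m:
            break  # current target not yet reached; keep flying toward it
        index += 1
    return index
```

An inspection route is then just a list of (latitude, longitude) pairs, and the autopilot calls `advance_waypoint` on every GPS update, which is what lets a pipeline run be flown with little or no pilot input.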

Shipping and Delivery

Just as a large military does, large corporations are providing much of the drive for drone development in the private sector. Amazon is one of the companies driving this development, with the goal of delivering goods directly to a customer’s door by drone. Currently, Amazon has to rely on a postal-service infrastructure it does not control to deliver its products. Having its own door-to-door delivery system would be a huge advantage for Amazon, reducing its reliance on other organizations. It would also reduce the time it takes customers to receive their purchases, increasing the demand for Amazon’s services and increasing Amazon’s share of the retail market.

Because of the high-profile nature of Amazon’s development of a drone-delivery system, Amazon is increasing the visibility of the need for regulation of the drone market. Currently, drones are used in very limited roles and are not very visible to the average citizen, but if drones are going to be delivering packages to your neighborhood, their flight paths are going to have to be limited to reduce the chance of crowding the airspace. This brings up one of the biggest downsides of developing drones in the private sector: changes in the law could have instant and devastating effects. Laws are already being passed that require registration of drones and place a growing number of restrictions on their use. There is a large risk that a company will invest large amounts of time and resources in a technology that could be restricted to the point where it is no longer profitable.

Precision Agriculture

In an article written by Christopher Doering and published in USA Today online, “drones are quickly moving from the battlefield to the farmer’s field — on the verge of helping growers oversee millions of acres throughout rural America and saving them big money in the process.” Currently, farmers have to rely on satellite technology, aircraft, and physically walking their fields to find signs of insect problems and watering issues that can affect crop yields. A drone will be able to save them large amounts of money and substantial amounts of time. Drones will also allow farmers to tailor their use of pesticides, herbicides, and fertilizers to the needs of a specific point in a field that drones will be able to constantly inspect. The possible uses of drones in agriculture led the Association for Unmanned Vehicle Systems International to predict that “80% of the commercial market for drones will eventually be for agricultural uses.”[5]


Mining

Mining is also an industry looking to use unmanned drones to save costs. Along with using aerial drones to inspect the outside of a mine, companies are starting to use tracked drones to descend into mines where it has become too dangerous to send a person. Losing a drone to a cave-in is far cheaper than having to organize an entire rescue operation, and drones don’t have families that will miss them either. Drones are also a good fit because they can collect mapping and condition data that can then be used to plan out the mining operation.[6]

Search and Rescue and Disaster Management

With recent natural and man-made disasters, companies are starting to build drones to aid in the search and rescue of victims and to serve as tools for disaster-recovery workers. Usually, when a disaster happens, helicopters and aircraft are flown in to help search for and rescue the people affected, but organizing that response takes time and large amounts of resources. Smaller, cheaper, easier-to-use drones that can locate victims with video cameras and sensors are a faster and more cost-effective solution. Along with the smaller size and price, more cities and government agencies will have access to drones, drastically reducing the time it takes to deploy them in the field. Beyond search and rescue, drones will also be able to monitor conditions, helping officials make quicker and better-informed decisions on how to react to changes in the situation.[7]

Similar to their use in mining, drones can go where people simply can’t, and that was the case in the aftermath of the Fukushima nuclear meltdown in Japan. Drones were able to reach places that were not safe for humans so that vital data could be collected to help officials assess the situation.[8]

Storm Tracking and Forecasting

It’s hard to call this a private-sector opportunity for drones when much of the storm tracking is done by the government and the military. Some of the drones used for this task have been handed down from military use to NASA, such as the Global Hawk.[9] The cost of operating these drones is actually high compared with smaller private drones, but the biggest advantage of their use is their ability to get into the eye of a storm without risking the safety of a pilot or incurring the larger expense of operating a manned aircraft. The unmanned drones used by NASA are still large enough to carry a wide range of sensors that collect and transmit critical data back to forecasters on the ground so that proper steps can be taken to minimize the effects of inclement weather.


Drones are being used in more and more arenas in the private sector, and that trend does not seem to be changing. People are seeing them more often, and more people are taking an interest in experimenting with them. For now, the average person’s interaction with a drone is recreational, a hobby, but as companies continue to develop drone technology as a tool for productivity, more companies will start using drones to perform vital business duties. Drones are a cheap alternative to much more expensive manned aircraft, and because of the low costs, more and more jobs are evolving into unmanned jobs. The biggest threat to drones is the law. As mentioned before, laws that restrict the usage of drones could instantly turn a new, thriving business endeavor into an extinct one.

[1] Weinberger, S. (2014). The ultra-lethal drones of the future. Retrieved November 03, 2016, from

[2] Bond, M. (June 5, 2014). MultiBrief: Law enforcement experimenting with surveillance drones. Retrieved November 03, 2016, from

[3] UAV for civil security: Police drones, traffic control, monitoring, etc. (n.d.). Retrieved November 03, 2016, from

[4] UAV inspection for the Power and Utility industries. (n.d.). Retrieved November 03, 2016, from

[5] Doering, C. (2014). Growing use of drones poised to transform agriculture. Retrieved November 03, 2016, from

[6] 10 Incredibly interesting uses for Drones – Drone Buff. (2016). Retrieved November 03, 2016, from

[7] Search & Rescue: UAVs / drones for fire service, monitoring etc. (n.d.). Retrieved November 03, 2016, from

[8] Woollaston, V. (2014). Fukushima, the aftermath: Eerie drone footage reveals the apocalyptic wasteland of Japan’s abandoned east coast. Retrieved November 03, 2016, from

[9] Richardson, B. (n.d.). Drones could revolutionize weather forecasts, but must overcome safety concerns. Retrieved November 03, 2016, from

Lights Out… and Everything Else Follows…

It shouldn’t come as a surprise that everything is being controlled through the Internet. As IoT devices become commonplace and US citizens become ever more dependent on their smartphones and the Internet, it should be clear that even the huge things in our society are controlled over the Internet. What might not be as clear is that one piece going down will have a domino effect on the rest of the system we rely on.

It’s obvious that we are a society that runs on electricity. The entire country has become very dependent on power; without it, everything we have built will come to a grinding halt. If the power goes down, communications end, Dr. Kovac won’t be able to get his Facebook messages, stores won’t be able to get shipments, and it won’t be long before people take to the streets.

One of the key points that Ted Koppel makes in the book is that we need to build security into our plans, not just add it after the fact. Keeping our networks secure should be the first thought in building a network. If the network is not secure, nothing on that network is secure. He also points out that no network is ever going to be totally safe, and the number of attacks and attackers is only going to increase, so security measures should increase with them. If we don’t start to focus on network security, we are going to find ourselves sitting in the dark…

Cloud Computing Overview

What is the Cloud?

Simply put, the cloud is a network of servers accessible through the internet. This means that to access the cloud, all you need is a computer connected to the internet. The network of servers you connect to handles running the applications, storing the data, and providing the infrastructure needed for everything to work and connect. This is the biggest advantage of moving to the cloud: our clients do not have to invest large amounts of time and resources (money) in building, managing, and servicing their own network and server infrastructure. Partnering with a cloud provider will allow our clients to focus less on the physical IT side and more on their products and services.[1] I will highlight more advantages and some of the potential weaknesses of the cloud later, but first, let me bring you up to speed on the basic services of the cloud.

Software as a service (SaaS): Applications are hosted and run on the cloud infrastructure and accessed through the internet. Think of Google Drive, Salesforce, and other CRM clients. Advantages:

  1. SaaS reduces the time it takes to install and configure an application on local servers and individual access points by keeping the install and configuration on a single cloud-based server, decreasing deployment time. It also lets new software releases become available quickly by removing the need for customers and clients to download and install them.
  2. SaaS reduces costs by cutting hardware and maintenance expenses and by allowing small and medium-sized businesses to access software for a much lower licensing fee.
  3. Cloud-based software is highly scalable: features and services can be quickly added or removed based on the current needs and requirements of customers and clients.[2]

Platform as a service (PaaS): The cloud provider delivers the hardware and software tools needed for application development as part of its service. Advantages:

  1. PaaS reduces the need for, and the costs of, in-house hardware and software development tools.
  2. It also reduces the need for the additional hardware and software required to provide operating systems, middleware (such as databases and servers), and security tools.
  3. PaaS provides quicker application performance monitoring (APM), so our clients can access user data more quickly, allowing better customer responsiveness.[3]

Infrastructure as a service (IaaS): Compute resources are hosted on the cloud and provided to our clients on demand through the internet. Our clients can adjust and monitor their cloud-based infrastructure through a web-based graphical user interface that serves as an IT operations management console for the overall network.[4] Advantages:

  1. IaaS reduces the costs of providing upkeep, ensuring uptime, and maintaining and upgrading hardware in an in-house network. It also eliminates the cost of owning more infrastructure than is needed.
  2. It’s a completely scalable and flexible solution to the infrastructure demands of our clients and their customers, allowing quick changes in size as demand increases and decreases.
  3. It’s a cost-effective way to prepare for disaster recovery. If a disaster strikes our client’s facility, internet access is all that is needed to reconnect to their infrastructure and data, which are stored remotely and, in most cases, in different physical locations.[5]
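The elasticity behind IaaS can be pictured with a toy example. The sketch below is purely illustrative: the thresholds and function name are my own assumptions, not any provider’s actual API, but it shows the kind of threshold-based rule a cloud platform can apply automatically to grow or shrink a client’s server fleet as demand changes:

```python
def scale_decision(current_instances, avg_cpu_percent,
                   min_instances=1, max_instances=20,
                   scale_up_at=75, scale_down_at=25):
    """Toy threshold-based autoscaler: return the new instance count.

    Adds a server when average CPU load is high, removes one when it is
    low, and never leaves the [min_instances, max_instances] range.
    """
    if avg_cpu_percent > scale_up_at and current_instances < max_instances:
        return current_instances + 1
    if avg_cpu_percent < scale_down_at and current_instances > min_instances:
        return current_instances - 1
    return current_instances
```

Under a pay-as-you-go model, each step down in instance count is an immediate cost saving, which is exactly what an in-house data center sized for peak load cannot offer.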

Advantages of Cloud-based Services.

There are several large advantages for our clients in migrating to the cloud, some of which were covered in the details above, but let me summarize them here.

  1. Cost Savings: There are many ways in which the cloud can save our clients time, resources, and capital, the largest being removing the need for a large in-house network infrastructure. The client will also need fewer employees to manage their infrastructure and to create, develop, and test the applications their customers use to access their goods and services. Our client will also have fewer resources sunk into preparing for disaster recovery and business continuity in worst-case scenarios.
  2. Flexibility and Scalability: The cloud allows our clients to quickly increase or decrease their infrastructure and software to better meet the demands of their customers. It’s also possible for our clients to make faster changes to the services they are offering and the cloud decreases the amount of time it takes for our clients to rollout these new changes to their customers, increasing their customer responsiveness. Scalability is tied directly to the pay-as-you-go model so our clients will only be paying for the services their customers are using, saving on resources that are not being utilized.
  3. Reliability: Along with being more cost-effective, cloud-managed services are more reliable than in-house IT infrastructure. Most cloud providers offer Service Level Agreements (SLAs) guaranteeing 24/7/365 service and 99.99% availability; meeting the same requirements on their own private networks would normally cost our clients large amounts of staff, resources, and time. Cloud providers also offer a high level of redundancy that would likewise cost our clients large amounts of resources. If one of the cloud-based servers goes down, our client’s data is served from a different server, drastically reducing downtime.[6]
  4. Manageability: Cloud-based services provide our client with web-based management tools so they can quickly make changes to their products remotely. There is no need for any of the tools or applications to be installed locally, reducing the demand on their IT staff. Again, this will all depend on the SLA negotiated with the chosen cloud provider.
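The pay-as-you-go point in item 2 can be made concrete with a little arithmetic. The functions and numbers below are hypothetical, chosen only to illustrate how hourly cloud pricing compares against a fixed in-house cost:

```python
def payg_cost(hours_used, rate_per_hour):
    """Pay-as-you-go cloud cost: pay only for the hours actually consumed."""
    return hours_used * rate_per_hour

def breakeven_hours(fixed_monthly_cost, rate_per_hour):
    """Monthly usage above which a fixed in-house cost beats pay-as-you-go."""
    return fixed_monthly_cost / rate_per_hour
```

With an assumed rate of $0.10 per server-hour against a $500-per-month in-house fixed cost, the break-even point is 5,000 server-hours a month; below that level of use, the cloud is the cheaper option, and unused capacity costs our client nothing.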

Disadvantages of Cloud-based Services.

Along with the many advantages of the cloud, come some disadvantages. Many of the disadvantages of the cloud are also disadvantages of having an in-house IT infrastructure as well. Here are the disadvantages that I’ve found:

  1. Downtime: Moving everything to the cloud drastically increases our client’s dependency on a constant internet connection. It also increases the number of failure points where connectivity can be lost. Depending on where the cloud provider stores our client’s data, there can be more hardware that could fail, causing more downtime. Guiding our client in the selection of a quality cloud provider and negotiating SLAs will help mitigate this risk.
  2. Security and Privacy: This is the big one. Our clients will be entrusting their data to an outside company. That means there is not only the threat that someone might hack into the system, but also the risk that someone inside the provider might gain access to their data. There are ways to mitigate this risk on both our client’s side and the cloud provider’s side.[7] Again, a lot will come down to the SLAs and the trustworthiness of the provider, and these are risks our client would also face with an in-house solution; in fact, cloud providers are a more cost-effective way of combating data breaches. Managing data is the largest part of their business, and they will have better-trained staff to handle security threats. Where would you rather keep your money: in your house, where you provide all the security, or in a bank, which has a much stronger and more secure facility?
  3. Vulnerability to Attack: In the analogy above, a bank is the more secure location, but it is also a location the bad guys know will have what they are after. The cloud is similar, and it does increase the number of access points and vulnerable locations where a bad guy could get in, but again, security is the bread and butter of what a cloud provider does. Having our client include us in the selection of a cloud provider will be one of the best ways to assure them that they are making the best decision in migrating to the cloud.[8]
  4. Limited Control: Cloud providers are not going to be able to offer every single service our client may be interested in. With an in-house system, our client would be virtually limitless in the kinds of services they could provide, but that freedom comes with a price. This is another talking point to have between our client and potential cloud providers.[9]

[1] Fee, J. (2013, August 26). The Beginner’s Guide to the Cloud. Retrieved November 10, 2016, from

[2] Sylos, M. (2013, September 18). Top five advantages of software as a service (SaaS) – Cloud computing news. Retrieved November 9, 2016, from

[3] Rouse, M. (2015, January). Platform as a Service (PaaS). Retrieved November 10, 2016, from

[4] IaaS – Infrastructure as a Service – Gartner IT Glossary. (2014). Retrieved November 10, 2016, from

[5] StateTechStaff. (2014, March 14). 5 Important Benefits of Infrastructure as a Service. Retrieved November 10, 2016, from

[6] Advantages and Disadvantages of Cloud Computing | LevelCloud. (n.d.). Retrieved November 10, 2016, from

[7] Seshachala, S. (2015, March 17). Disadvantages of Cloud Computing | Cloud Academy. Retrieved November 10, 2016, from

[8] Lukan, D. (2014, November 21). The top cloud computing threats and vulnerabilities in an enterprise environment. Retrieved November 10, 2016, from

[9] Seshachala, S. (2015, March 17). Disadvantages of Cloud Computing | Cloud Academy. Retrieved November 10, 2016, from

Undergrad is to Masters as Masters is to CICS

As Dr. Gillette has said in class, “high school is to undergrad as undergrad is to a masters.” I would like to add a little bit to that: undergrad is to a masters, as a masters is to a masters in CICS.

I won’t say that CICS is the greatest Master’s program in the nation; I simply don’t have that level of expertise on the subject. I do have a previous Master’s degree from Ball State, and I CAN say that the CICS degree ranks well above my previous program. I believe the level at which the CICS program prepares graduates for success in the workplace is unparalleled at the University. Just yesterday, a professor in the Marketing department told me that he believed the CICS program is the best Master’s program here at Ball State.

I understand that different programs have different expectations for their graduates, but CICS puts a large focus on preparing graduates for professional success. It has built a large network of connected alumni, evident from the amount of interest employers show when it comes to hiring new graduates. I feel that my previous Master’s degree dropped the ball when it came to connecting graduates with prospective employers. I was in the second cohort of the Digital Storytelling program, and I’m sure the program has improved in that department, based on what I’ve heard from more recent graduates.

The Center’s focus on its core values also adds value to the degree its graduates earn. Creativity, Integrity, Communication, and Service are values that apply not only to the work done in the Center but to all the work we will do in our professional lives. Employers want employees who can creatively solve problems, take great pride in their work and conduct themselves responsibly and professionally, value communication and understand its key role in our work, and give back by adding value to the community.

I can’t speak for all the programs throughout the United States, but the Center’s focus on creating leaders with professional competency and integrity puts this degree above the majority of degrees from other programs around the country. The Center helps graduates learn these values by immersing candidates in an intensive, group-oriented curriculum that intensifies and pressurizes the learning environment. All diamonds are created under great amounts of pressure.

Stealing Science

We have all been taught throughout our extensive schooling that cheating is bad, horrible, unacceptable, possibly the worst act you could commit as a student. The penalty for cheating was the most severe: turning in the worst paper in the class would yield you more points than the person who got caught cheating. Cheaters would be called out, singled out, made an example of. You DIDN’T want to get caught cheating; you didn’t even want to think about taking the risk with a penalty that severe. Cheating could bring your educational endeavors to a sudden end.

Plagiarism: the practice of taking someone else’s work or ideas and passing them off as one’s own (Google, 2016)

How does plagiarism play into cheating? I have always been taught that it is synonymous with cheating. The word comes from the Latin plagiarius, meaning kidnapper. This implies that plagiarizing work is the same as kidnapping it, and we all know that kidnapping is a crime that carries the severest of punishments. People have always valued their work and ideas the way they value their own children. Work and ideas are what everything is built on; without the ability to protect these invaluable things, there would be no way to secure one’s future. What value would work and ideas have if there were no way to secure their ownership?

Plagiarism in science is not always as black and white as it might sound. If a scientist is working on an idea and runs into problems, is it wrong for another scientist to take over, introducing their own ideas? At what point does one person’s work become the work of another? To what degree do scientists need to cite previous work done by others? At what point does a scientist have the right to step in and claim plagiarism? What proves that an idea was the original work of a particular scientist? Would progress in science ever be made if people were not able to pull from other’s work? See, not so black and white.

History has many examples of instances of plagiarism that have influenced the answers to the questions I’ve asked above. I’m curious how ideas of power and control influence the perception of plagiarism. At the end of World War II, the Allied powers kidnapped as much as they could of the research and development the Germans had done in advancing their tools of war. Designs, prototypes, fully functional weapons, and even the scientists themselves were taken. Were these examples of plagiarism? Were the originators of these ideas given full credit for their work? Or did the perception of power influence people’s judgement of right and wrong?

Hitler himself is said to have coined the term assault rifle when he was shown the prototype of what was to become the world’s first intermediate-caliber automatic rifle, the Sturmgewehr (German for storm rifle, as in storming or assaulting a castle). The weapons designer Mikhail Kalashnikov took inspiration from captured German Sturmgewehrs when he designed the rifle that began trials in 1947, becoming the infamous AK-47, the Avtomat Kalashnikova, or automatic Kalashnikov, bearing his name and the year of its introduction. Over 75 million AK-47s and variants have been produced and have seen, and will continue to see, service in hundreds of countries across the globe. Even the flag of Mozambique has the outline of an AK-47 on it. There are arguments out there claiming that the AK-47 was an instance of plagiarism.

There are many more examples of technologies taken from the Germans that have had huge impacts on the world. Things like the jet engine, moon landings, and nuclear weapons and energy might not have been possible without the possible plagiarizing of Nazi German research.

Wearable Technology: Connectivity & Implementation (ITERA 2016 Conference)

For my first blog post, I wanted to share with my fellow classmates my section from the paper that my group wrote for ICS 620 on Wearable Technology. My section focused on GPS and other wearable military technologies. I’ve been interested in the military and military technology since a very young age, always wanting to be a helicopter pilot in the Army. My father spent over 20 years in the National Guard, and he was a big inspiration to me; I always thought I would follow in his footsteps and join.

I wasn’t ready to join after graduating from high school, and my mother talked me into coming to Ball State to study Telecommunications. My parents knew that I wanted to join the service, but they also wanted me to be one of the first Doubs to graduate from college. Had I known that 9/11 was going to happen in the middle of my junior year of college and that our nation would be thrust into war, I might have reconsidered my choice not to join the service, but that is a story for another day.

One of the reasons I want to share this work with the class is that we submitted the final paper to ITERA and were chosen to present at the 2016 ITERA Conference in Louisville. Presenting at the conference was a valuable experience. Not only was I able to stand in front of my peers and present research that excites me, I was able to attend other presentations and get a better idea of other research being done in programs similar to the one here in CICS. It was also a great opportunity to do a little networking and get a feel for the potential job market that I will find myself entering upon graduation.

I’m just posting my section from the paper, but if anyone is interested in reading the entire paper, please don’t hesitate to leave a comment or drop me an email. Here is my section from the research paper titled:

Wearable Technology: Connectivity & Implementation


As technology becomes faster and smaller, armed forces around the globe are shifting their focus away from vehicle-mounted equipment to soldier-mounted equipment. Technologies that are already battle-proven are being miniaturized and deployed on the battlefield by individual soldiers. It won’t be long before a soldier’s uniform itself is used for more than just camouflage; it will become an integral part of how a soldier functions in battle (Roncone, 2004). There are a handful of platforms currently deployed by some nations, and even more being developed, examined, and tested for future conflicts.

“Where am I, where are my friends, and where is the enemy?” are questions that every soldier has asked in every battle since armies first took to the field. Previous generations of soldiers were forced to rely on the accuracy of paper maps and the use of compasses and protractors to mark their position, the position of friendly forces, and the reported position of the enemy. Another key element of this outdated system was the availability and speed of communication between observers, commanders, and troops in the field needed to maintain consistent coordination of forces against the enemy. As battlefields began to expand across entire nations and armies became more mobile and started taking to the air, the demand for immediate communication of positions began to grow exponentially (“Battlelab: Assessing Digitisation,” 2015).



Where Am I?: Global Positioning System

Currently, the question of “Where am I?” is being answered by the Global Positioning System, or GPS. GPS was initially developed by the United States’ Department of Defense (DoD) to track military personnel, vehicles, aircraft, and munitions, and has since been opened up to the civilian market, where the most significant developments have taken place over the past 20 years (Moore, 1994). The predecessor to GPS was a system called the Navy Navigation Satellite System (NNSS), also called TRANSIT. It was initially implemented in the 1960s, but it had two major flaws: it suffered from large time gaps in coverage between satellite passes, and it was relatively inaccurate (Wellenhof & Lichtenegger, 1997).

The current GPS system in place is called NAVSTAR and was implemented in the late 1970s. NAVSTAR didn’t reach “Initial Operational Capability” until July 1993, and it didn’t reach “Full Operational Capability” until April 27, 1995, when all 24 satellites were successfully placed in their correct orbits (Jones, Sutherland, & Tryfonas, 2008). Several orbit schemes were proposed, and it was decided that the most cost-effective option was 24 evenly spaced satellites placed in 12-hour orbits to provide constant global positioning capability. For an accurate position to be determined, a GPS receiver on Earth needs line-of-sight to a minimum of 4 satellites. There are usually more than 4 satellites visible; at times, there can be upwards of 10 satellites visible, and in these narrow windows, more accurate surveys can be conducted (Wellenhof & Lichtenegger, 1997).

There are three main components to the GPS system: the Space Segment, the Control Segment, and the User Segment. The United States Air Force Command, along with outside contractors, controls the first two components. The User Segment was originally limited to military use, but was opened up to the private sector in the 1990s. When it was initially made available for civil use, the United States military encrypted the precise signal to limit the accuracy of commercial GPS units in an attempt to keep the signals from being utilized by opposing militaries. In May of 2000, the United States stopped encrypting the signal, opening up exact GPS accuracy to the consumer market (Michalski, 2004).

The first segment, the Space Segment, consists of the GPS satellites in orbit, arranged into a moving constellation. The layout of the constellation helps ensure that the minimum four satellites can be seen at the same time by a single GPS receiver on Earth. The second segment, the Control Segment, is a series of ground stations that communicate back and forth with the satellites and other ground stations to monitor and control the satellites in orbit. The last segment is the User Segment. This segment includes all of the GPS devices that are passively receiving signals from the orbiting satellites (Jones et al., 2008).

The initial user of the GPS system was the military; they envisioned that every single ship, aircraft, tank, jeep, and soldier would have a GPS receiver to help coordinate military activities. They also determined that by using four different antennas spread out over set distances, they could determine the pitch, roll, yaw, and position of a ship or aircraft (Wellenhof & Lichtenegger, 1997).

While there are multiple manufacturers of both military and civilian GPS receivers, there are several key features that all GPS receivers have in common. First of all, they must have basic computer components: a Central Processing Unit (CPU), Random Access Memory (RAM), and some type of storage memory. The receiver also must have radio equipment that can receive and distinguish signals from multiple satellites while maintaining the ability to filter out noise. A screen to display output from the CPU is also essential. In order to process the incoming signal and access the stored data, there must be an operating system (OS) (Jones et al., 2008). As technology has advanced and computers have become miniaturized, it has become possible for anyone with a handheld device to know their exact position at any given second.

All GPS receivers perform the same three tasks: they collect and amplify the low-power signal broadcast by the satellites, they measure the signals, and then they compute position, velocity, and time (PVT) based on the collected information. For the receiver to accurately compute an exact position, it needs to calculate and measure several different variables. It first computes the exact location in space of each of the satellites from the signal that satellite is transmitting; it then measures the travel time it takes to receive the signals from each satellite, and then accounts for delays in travel time caused by Earth’s atmosphere. Once all of the required information is collected and computed, the receiver will display an exact position on Earth (GPS: The First, 2007).

The signal broadcast by the satellite is called pseudo-random number code, or PRN. The PRN code has three responsibilities: it needs to uniquely identify each satellite, provide the crucial timing information, and be able to be amplified so that the GPS receiver does not require a large satellite dish in order to receive all of the needed information. The PRN code is a binary-based code that repeats itself every millisecond, and on each repeat, a unique sequential identifier is added to the code. The ground control stations ensure that all of the broadcasting satellites are in sync with each other, guaranteeing perfect timing within the satellite constellation. The GPS receiver has been programmed to predict the timing of the PRN code as well; it then calculates the time between when the signal was scheduled to be broadcast and the time when the signal was received. Once the travel time has been determined for at least four different satellites, the receiver can calculate its exact location (GPS: The First, 2007).
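As a rough illustration of the position calculation described above, the sketch below solves the four-satellite pseudorange problem with a few Gauss-Newton iterations. It is a minimal numerical toy, not receiver firmware: the satellite positions, receiver location, and clock bias are made-up numbers chosen only to be geometrically plausible.

```python
import numpy as np

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Recover receiver position (x, y, z) and clock bias b (all in
    meters) from four or more satellite pseudoranges."""
    # Start near Earth's surface with zero assumed clock bias
    x = np.array([0.0, 0.0, 6_371e3, 0.0])
    for _ in range(iterations):
        diffs = sat_positions - x[:3]           # vectors to each satellite
        ranges = np.linalg.norm(diffs, axis=1)  # geometric ranges
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian rows: negative unit line-of-sight vector, then 1 for bias
        H = np.hstack([-diffs / ranges[:, None], np.ones((len(ranges), 1))])
        x += np.linalg.lstsq(H, residuals, rcond=None)[0]
    return x[:3], x[3]

# Hypothetical satellites at roughly GPS orbit altitude (~20,200 km up)
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,  6_100e3, 18_390e3],
])
truth = np.array([1_111e3, 2_222e3, 6_000e3])  # pretend receiver location
clock_bias = 85_000.0                          # clock error, in meters
rho = np.linalg.norm(sats - truth, axis=1) + clock_bias
pos, bias = solve_position(sats, rho)
```

With perfect pseudoranges the solve recovers `truth` and `clock_bias` almost exactly; a real receiver additionally corrects for atmospheric delay, satellite clock error, and measurement noise.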

It has become clear that GPS has quickly become an integral part of how the United States, and other nations around the world, conduct military operations. Precise location information dramatically increases military effectiveness, reducing the number of missions required to accomplish objectives, and reduces the potential for unintended collateral damage. With GPS being incorporated into every single piece of the military puzzle, it also increases the reliance on the system, thus increasing the focus that must be placed on the potential vulnerability of the system. The Heritage Foundation’s report “Defending the American Homeland” listed designating GPS frequencies and network as critical national infrastructure as number two on the list of top priorities in defending the nation’s infrastructure (Simonsen, Suycott, Crumplar, & Wohlfiel, 2004).



Where are my Friends?: Battle Management System

With GPS solving the “Where am I?” question, it’s time to examine how the soldier of today answers the “Where are my friends?” question. Simply put, the answer lies in combining the GPS system with wireless communications systems so that the location of every single combat element can be shared amongst them. With GPS alone, a soldier would still communicate his position by radio, and it would have to be tracked by hand on a map, increasing the need for more communication of troop movements and the focus on accuracy. Northrop Grumman’s Force XXI Battle Command Brigade and Below (FBCB2) was the first Battle Management System (BMS) to incorporate GPS transponders mounted on vehicles to communicate their position to all units in their radio network automatically. This self-forming “tactical network” was the first time soldiers and commanders could see exactly where they were relative to everyone else in the element. The soldiers were able to see on computer screens the location of friendly vehicles and monitor their movements. In January of 2001, the 4th US Infantry Division (4ID-Mech) was declared the First Digitized Division (“Battlelab: Assessing Digitisation,” 2015). As BMS systems continue to be implemented across the globe, entire armies are becoming completely digitized.

This digitization of the battlespace has become an integral part of communicating both Situational Awareness (SA) and Command & Control (C2) information amongst all the units in a dispersed and dynamic battlefield (Chevli et al., 2006). There are two aspects of digitization that remain key to its continued implementation: the existence of a reliable network with sufficient bandwidth to handle the transmission of all the data, and the need for a common set of applications and systems that use common formats and protocols to allow proper transmission between all nodes on the network (“Battlelab: Assessing Digitisation,” 2015).

The original FBCB2 system relied on line-of-sight (LOS) radios to communicate GPS information between vehicles. Early deployments of FBCB2 into Kosovo revealed that difficult terrain, paired with the wide spread of a limited number of vehicles, made it nearly impossible to rely solely on LOS communications (Baddeley, 2005). Military planners quickly realized the need to integrate beyond line-of-sight (BLOS) communications into the current LOS-based BMSs. Blue Force Tracking (BFT) has become the answer to that problem.

BFT combines LOS systems, such as the Enhanced Position Location and Reporting System (EPLRS) and the Single Channel Ground and Airborne Radio System (SINCGARS) operating on VHF and UHF frequencies, with the BLOS MT-2011 satellite transceiver system. The signals propagated by BFT devices are transmitted through a commercial L-band satellite to a ground station. The ground station then relays the signals to the Network Operations Center (NOC) via either SATCOM or land lines. The NOC is responsible for managing the flow of data between BFT devices that require either LOS or BLOS connections to complete the transmission (Chevli et al., 2006). The FBCB2-BFT system delivers both tactical- and operational-level information that includes the positions of friendly units in relation to each other (Bryant & Smith, 2013). The strength of FBCB2-BFT relies more on its ability to effectively communicate between the systems mounted in vehicles or carried by soldiers and less on the effectiveness of the software or the power of the computers running it (Baddeley, 2005).

A limited number of units were equipped with early versions of FBCB2-BFT prior to being deployed to Iraq in 2002. After-action reports coming in from the field showed that the FBCB2-BFT system was providing significant increases in the speed with which commanders could make tactical decisions, and with a far greater degree of certainty. The reports also showed that controlled troop movements could continue even when visibility of the units on the ground had been reduced to 0 meters by sandstorms. The system also provided a common operating picture (COP), where everyone on the system could see the same information. According to the author of “Battlelab: Assessing Digitisation on 21st Century Battlefields,” the US Army’s Tactics, Techniques, and Procedures (TTP) publication for mechanized infantry operating from the M2 Bradley Infantry Fighting Vehicle (IFV) effectively summarizes the advantages and disadvantages of having a common operating picture:

An accurate and current common operational picture is a key tool for the platoon and squad leaders. It identifies friendly locations, suspected or confirmed enemy positions, obstacles, and other information vital to the success of a mission. The same common operational picture is displayed to subordinates, superiors, and adjacent units. However, platoon and squad leaders have to understand that the common operational picture is only as accurate as the data fed into it. It might not identify all enemy positions or, especially, friendly units that are not equipped with the FBCB2. (p. 3)



Where is the Enemy?

“Where is the enemy?” is a question that becomes harder to answer as military tactics and technology change. In previous centuries, war was conducted on a single battlefield, between two opposing sides, marching in rows, meeting in the middle to conduct combat. Over time, warfare has evolved into what we see today: cities turned into battlefields, enemies hidden among allies and civilians, and munitions delivered from the air, directed from another continent. Advancements in technology are making it easier to see and identify the enemy, as well as track and communicate its position to other friendly forces. Computers, GPS equipment, and global communications systems working in unison to provide real-time information to soldiers and commanders have greatly improved both the situational awareness and the combat effectiveness of militaries around the world. Soldiers are now able to move more freely through the battlespace knowing the locations of friendly and enemy forces (Jones-Bonbrest, 2012).

Just as the Global Positioning System solved question number one and then aided in solving question number two, the combined FBCB2-BFT BMS is not only the answer to question two; it’s also a large part of the answer to question number three. Once the positions of friendly forces have been collected and distributed, it is possible to plot suspected or confirmed enemy locations onto the same battle map being used to view friendly forces. The system has been engineered to display the exact location of friendly forces in blue and the locations of enemy forces or improvised explosive devices (IEDs) in red (Jones-Bonbrest, 2012).

The individual soldier is able to designate enemy targets through the use of a Multi-Function Laser (MFL). Usually mounted on the soldier’s weapon system, the MFL transmits the distance, elevation, and direction of the target to the BMS. Since the FBCB2-BFT is already tracking the position of the friendly soldier through the GPS system, the BMS can pinpoint the exact location of the enemy target and transmit that information to the rest of the battle force. The biggest advantage of the MFL is that it communicates the enemy position without requiring the soldier to use more traditional communication channels that would demand more movement or the use of voice, either of which could give away his position (Fitzgerald, 2007). The other advantage of a weapon-mounted MFL carried by every soldier on the battlefield is that it eliminates the need for a Joint Terminal Attack Controller (JTAC) to be deployed with the combat element, which increases the size and sometimes limits the mobility of the squad. A JTAC is usually not a member of the Special Forces community, but instead a member of the United States Air Force, and his sole responsibility is to mark targets and communicate with the Air Force to provide air support for the operation. Giving every soldier the ability to perform the task of a JTAC reduces the number of soldiers required to perform mission-critical tasks and increases the lethality and mobility of the entire squad (“SWaP Shop: Future,” 2015).
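The geometry behind this is simple enough to sketch. The toy function below (my own illustration, not the MFL’s actual implementation) converts a lased range, azimuth, and elevation into a target position in a flat local East-North-Up frame anchored at the soldier’s GPS fix; the numbers are invented.

```python
import math

def target_position(shooter_enu, range_m, azimuth_deg, elevation_deg):
    """Locate a lased target from the shooter's own position plus the
    range/azimuth/elevation measured by the laser. Azimuth is measured
    clockwise from true north; the frame is local East-North-Up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horiz = range_m * math.cos(el)  # distance projected onto the ground
    e, n, u = shooter_enu
    return (e + horiz * math.sin(az),
            n + horiz * math.cos(az),
            u + range_m * math.sin(el))

# A soldier at the local origin lases a target 1,200 m out,
# bearing 045 degrees, 5 degrees above the horizon
tgt = target_position((0.0, 0.0, 0.0), 1200.0, 45.0, 5.0)
```

The BMS would then convert this local offset back into geographic coordinates before broadcasting it to the rest of the battle force.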

Along with communicating friendly and enemy positions, the system is also utilized to detect and track other threats that may be present on the battlefield. The Adaptable GIS Multi-threat Detection System (AGMDS) is used to detect, track, and display chemical, biological, radioactive, and explosive threats. It uses a collection of state-of-the-art sensors to detect chemical vapors, biological aerosols, and radiological threats and then plots them on the battlefield map based on the GPS locations of the sensors that detected them. As the sensors move around the battlefield, they continue to collect, update, and transmit data through the self-organized wireless network back to the control system. The control system processes all the data streaming in from different nodes and then factors in changes in battlefield conditions, such as wind and terrain. It then sends a more accurate picture of the potential threat on a GIS layer of the battlefield map to the soldiers on the ground so that forces can evade the threats as needed. The AGMDS system also collects soldiers’ physiological data through Zephyr BioHarnesses and wireless video surveillance cameras mounted on vehicles or soldiers’ helmets to further analyze situations as they unfold (Mcclintock, Saxon, Forsythe, Rascoe, & Risser, 2011).



Future Force Warrior

As network-centric technology is integrated into armed forces around the globe, the focus has been shifting from “big-ticket systems such as fighter aircraft, warships, and armoured vehicles” to systems to be fielded by individual dismounted soldiers. With warfare shifting from the open battlefield to more urban environments, it has become necessary to upgrade the capabilities of the individual infantryman to meet the challenges of dismounted operations (Weichong, 2009). One of the biggest challenges in developing a Future Force Warrior (FFW) system has been the difficulty of designing a single system that will be effective in completing a wide variety of combat tasks while also meeting size, weight, and power constraints that will not overburden the load on a dismounted soldier (“SWaP Shop: Future,” 2015).

Several technologies are being shared among the different FFW systems currently being developed. The first, and possibly the most important, element is the integration of a BMS, such as FBCB2-BFT, into the FFW system. To do this, each soldier must carry his own tactical-network-connected computer, radio and satellite communication equipment, and a GPS receiver. The first attempts were systems that would take commercial-grade off-the-shelf smartphones, tablets, and laptops, wipe their memory clean, install BMS software, and then strap them to the front or rear of a soldier’s vest. This created two distinct problems: the commercial-grade hardware would not hold up well in extreme battlefield conditions, and soldiers had trouble managing all the cables required to connect their computer to all their peripheral electronic devices, causing snags (Keller, 2013).

To solve the first problem, engineers looked to companies like Quantum3D in San Jose, California, to supply a purpose-built, ruggedized tactical visual computer (TVC) called Thermite. Thermite is a lightweight, ruggedized, sealed computer that is currently in use by the United States Army, Air Force, and Navy for a wide variety of tasks, including command and control, communications, intelligence, surveillance, unmanned aerial vehicles (UAVs), and real-time embedded training applications (Singer, 2015). The Thermite acts as the soldier’s central communication and processing hub for Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR), along with navigation, radio communications, live video display, and other mission-critical information (McHale, 2007).

The development of e-textiles is helping to solve the second problem. In previous systems, the materials used to construct the soldier’s uniform and load-bearing equipment (LBE) were strictly passive, intended only to carry gear, provide protection from the elements, and provide camouflage. As FFW systems started adding more and more electronic devices that each required their own batteries, military textile manufacturers started integrating power and data distribution via USB 2.0 technology inside the fabric in order to centralize the power source, eliminating excess cables and the need for soldiers to carry a supply of batteries (“SWaP Shop: Future,” 2015). The use of plastic optical fiber in e-textiles adds the benefits of higher bandwidth, protection against electromagnetic interference (EMI), greater resistance to nicks, and easy repair in the field by melting the fiber back together. Antennas are also able to be sewn into the LBE to help reduce the soldier’s signature and increase mobility (Winterhalter et al., 2005).


Field Study- FELIN

There is only one FFW system that can be considered in full production: FELIN (Valpolini, 2012). The French Army’s Fantassin à Équipement et Liaisons INtégrés (Infantryman with Integrated Equipment and Links), or FELIN, can be described as a digital integrated suite designed to enhance the dismounted soldier’s capabilities in terms of “precision, day/night combat, intelligence, and individual and collective self-protection.” Seventeen French Army regiments have been equipped with the FELIN system so far, with a total of 18,552 FELIN systems to be in the field by 2019. The system is based around the French Army’s SitComDé (Système d’Information Terminal - Combattant Débarqué) battle management system. SitComDé combines both geolocation awareness and Blue Force Tracking with real-time video streams of infrared images transmitted by optical gunsights and other optical sensors, including multifunction binoculars (“SWaP Shop: Future,” 2015).

One of FELIN’s primary advantages is its modular architecture. Soldiers are able to adapt the system to handle alternative communication solutions and swap between operational software that can be tailored to different missions or for different individual assignments. The vest is based around a “modular pocket concept” where all the subsystems (helmet, sights, weapons, radios, sensors) are able to be interconnected through wires sewn into the pockets (Valpolini, 2012).

Another advantage of FELIN is the increased situational awareness given to the soldier through the helmet-mounted display (HMD). The FELIN-equipped soldier is able to see information coming in from the SitComDé BMS, images captured by the night vision camera on the soldier’s helmet, and video captured by the camera built into the optics on the soldier’s weapon system. The soldier is then able to send video directly to other soldiers in the field or to commanders back at base through the SitComDé. Soldiers are also able to use the camera mounted on their weapon to see around corners and accurately engage targets without exposing themselves to incoming fire. Instead of using microphones mounted to a headset, voice commands are fed into the system through an osteophone inside the helmet that uses bone vibrations instead of sound waves. This system has been found to be more effective in noisy environments, perfect for the battlefield. The soldier’s weapon has also been modified with a push-button panel on the stock to allow the user to switch between sighting options and communication systems without having to take their weapon off target (Curlier, 2004).

The first live-fire training conducted with the French FELIN system was done at the Otterburn training ground outside of Newcastle upon Tyne, UK. There, the British and French armed forces conducted live-fire exercises between the 5th Battalion of the Royal Regiment of Scotland (5 SCOTS) and the FELIN-equipped 8e Régiment de parachutistes d’infanterie de marine (8e RPIMa), part of the French Army’s 11e Brigade parachutiste. During the five-day exercise, attacks were made both day and night by both the visiting and hosting armies, with the goal of preparing both the French and the British for future deployments to Afghanistan (Pengelley, 2012). The FELIN system impressed both sides, with the only complaints being its relatively heavy weight and relatively short battery life in the field. Soldiers were quick to learn that they needed to pay close attention to energy management and shut down parts of the system that were not in use (Forkert, 2012).



Field Study- TALOS

The US Special Operations Command (USSOCOM) is responsible for the “most adventurous future soldier technology currently in development,” the Tactical Assault Light Operator Suit (TALOS). The purpose of the TALOS program is to provide maximum protection to Special Operations soldiers while focusing on mobility and situational awareness. According to former USSOCOM boss Admiral Bill McRaven, increasing the protection of operators storming buildings and compounds was to be the primary objective of the TALOS project (“SWaP Shop: Future,” 2015).

The TALOS program began in 2013 in response to the number of Special Forces operators lost while storming buildings and compounds in Iraq and Afghanistan. Operators blamed the majority of losses on incoming enemy fire directed towards the “Fatal Funnel” created when operators first encounter a choke point caused by a limited number of entry points. The first generations of TALOS aimed at providing ballistic protection against small arms fire up to 7.62mm ammunition (the caliber used by the AK-47 platform) and increased the coverage of the armor by upwards of 44% in comparison with current armor options (White, 2014).

Along with improving ballistic protection, operators requested a host of integrated electronics to provide mission-critical C4ISTAR (Command, Control, Communications, Computers, Information/Intelligence, Surveillance, Targeting Acquisition and Reconnaissance) while operating in a dismounted capacity. To aid in this, TALOS designers have developed a motorcycle-style combat helmet that allows operators to clip on different mission modules to customize their suit for the mission at hand. The mission modules include sensors for Chemical, Biological, Radiological, Nuclear, and Explosives (CBRNE) detection and biometric recording and recognition, along with target acquisition and identification, navigation aids, and image intensification and thermal imaging. In a recent NATO study that examined future military operating environments in which the enemy is mostly comprised of unarmored and untrained guerrilla soldiers, it was determined that the greatest improvements to combat effectiveness will come from improved situational awareness and from an increase in the ability to acquire targets with weapon sights and optical sensors (“SWaP Shop: Future,” 2015).

Another module new to the battlefield is the Boomerang Warrior-X. Developed by the Raytheon Company, Boomerang is a soldier-mounted detection system with a wrist display that provides the soldier with the range and azimuth of incoming fire. Boomerang is also wired into the soldier’s communication gear to notify both the soldier and the BMS of a change in position of the threat as it moves and continues to fire. Boomerang uses an array of small microphones to detect and measure the muzzle blast and supersonic shock wave caused by incoming supersonic projectiles. Since each microphone in the array hears the sounds at slightly different times, the system is able to tell the soldier where the fire originated (“Prêt-à-porter: Military,” 2015). The British firm QinetiQ has been working on a similar system that can also identify the origin of incoming fire, but it uses low-power Short Wave Infrared (SWIR) technology working between 900nm and 1,700nm to more accurately identify incoming fire at greater ranges (“SWaP Shop: Future,” 2015).
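The time-difference-of-arrival principle behind systems like Boomerang can be shown with the textbook two-microphone case. This is a generic illustration of the acoustics, not Raytheon’s algorithm: the bearing of the sound follows from how much earlier the wavefront reaches one microphone than the other.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def bearing_from_tdoa(delta_t, mic_spacing):
    """Angle of an incoming sound, in degrees off the axis perpendicular
    to a two-microphone pair, from the time difference of arrival.
    A fielded system fuses many microphones for a full 3-D direction."""
    s = SPEED_OF_SOUND * delta_t / mic_spacing
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

# A muzzle blast arrives 0.25 ms earlier at one mic of a 20 cm pair
angle = bearing_from_tdoa(0.25e-3, 0.20)  # about 25 degrees off broadside
```

Combining several such pairs, plus the delay between the supersonic crack and the muzzle blast, is what lets the full system report both direction and range.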

With the increased weight of both armor and electronics, TALOS designers are looking at incorporating an exoskeleton to reduce the added strain on the soldier. The goal of the exoskeleton is to allow the soldier to walk greater distances, reduce the amount of fatigue caused by the extra weight, and minimize the risk of injury to muscles and joints. TALOS engineers are examining Lockheed Martin’s unpowered, or passive, FORTIS system for inspiration. As Donaldson observed, the FORTIS system consists of a “stiff pelvic belt that transfers heavy loads to the ground through jointed legs that allow the wearer to walk normally” (Donaldson, 2014). Future versions of TALOS are expected to use a powered exoskeleton system that will not only reduce the strain on the soldier but also increase the wearer’s strength.

Along with the weight of increased armor, more electronics, and the incorporation of a powered exoskeleton, the bigger problem has become providing sufficient power for the entire system to operate over extended periods of time. Anthony Davis, the director of science and technology at USSOCOM, has estimated that a powered exoskeleton supporting 500-600 pounds of armor, electronics, and gear would require 3-5 kilowatts of power over a 12-hour period, and “currently, there is nothing available man-packable that can provide that kind of power” (Magnuson, 2015).
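A quick back-of-the-envelope calculation shows why. Assuming roughly 250 Wh/kg for current lithium-ion cells (my own ballpark figure, not from the source), carrying that much energy as batteries is clearly impractical:

```python
LI_ION_WH_PER_KG = 250.0  # assumed energy density of lithium-ion cells

def battery_mass_kg(power_kw, hours):
    """Battery mass needed to deliver power_kw continuously for hours."""
    energy_wh = power_kw * 1000.0 * hours
    return energy_wh / LI_ION_WH_PER_KG

low = battery_mass_kg(3.0, 12.0)   # 36 kWh -> 144 kg of cells
high = battery_mass_kg(5.0, 12.0)  # 60 kWh -> 240 kg of cells
```

Even at the low end, that is well over a hundred kilograms of batteries, which is why TALOS planners are looking beyond conventional cells for a power source.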