
10 Breakthrough Technologies 2021 | MIT Technology Review

This list marks 20 years since we began compiling an annual selection of the year’s most important technologies. Some, such as mRNA vaccines, are already changing our lives, while others are still a few years off. Below, you’ll find a brief description along with a link to a feature article that probes each technology in detail. We hope you’ll enjoy and explore—taken together, we believe this list represents a glimpse into our collective future.



Messenger RNA vaccines

We got very lucky. The two most effective vaccines against the coronavirus are based on messenger RNA, a technology that has been in the works for 20 years. When the covid-19 pandemic began last January, scientists at several biotech companies were quick to turn to mRNA as a way to create potential vaccines; in late December 2020, at a time when more than 1.5 million people had died from covid-19 worldwide, the vaccines were approved in the US, marking the beginning of the end of the pandemic.

The new covid vaccines are based on a technology never before used in therapeutics, and it could transform medicine, leading to vaccines against various infectious diseases, including malaria. And if this coronavirus keeps mutating, mRNA vaccines can be easily and quickly modified. Messenger RNA also holds great promise as the basis for cheap gene fixes to sickle-cell disease and HIV. Also in the works: using mRNA to help the body fight off cancers. Antonio Regalado explains the history and medical potential of the exciting new science of messenger RNA.

GPT-3

Large natural-language computer models that learn to write and speak are a big step toward AI that can better understand and interact with the world. GPT-3 is by far the largest—and most literate—to date. Trained on the text of thousands of books and most of the internet, GPT-3 can mimic human-written text with uncanny—and at times bizarre—realism, making it the most impressive language model yet produced using machine learning.


But GPT-3 doesn’t understand what it’s writing, so sometimes the results are garbled and nonsensical. It takes an enormous amount of computation power, data, and money to train, creating a large carbon footprint and restricting the development of similar models to those labs with extraordinary resources. And since it is trained on text from the internet, which is filled with misinformation and prejudice, it often produces similarly biased passages. Will Douglas Heaven shows off a sample of GPT-3’s clever writing and explains why some are ambivalent about its achievements.
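Underneath, GPT-3 does one thing: given the text so far, predict a plausible next token, then repeat. The toy bigram model below is purely illustrative (GPT-3 itself is a transformer neural network with 175 billion parameters, not a word-count table), but it shows the same generate-by-repeated-prediction loop in miniature:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def generate(model, start, length=5):
    """Repeatedly append the most likely next word -- the same loop,
    in caricature, that large language models run at generation time."""
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram(corpus)
print(generate(model, "the", length=3))
```

Because the loop has no model of meaning, only of which words tend to follow which, it also hints at why such systems can produce fluent nonsense.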



TikTok recommendation algorithms

Since its launch in China in 2016, TikTok has become one of the world’s fastest-growing social networks. It’s been downloaded billions of times and attracted hundreds of millions of users. Why? Because the algorithms that power TikTok’s “For You” feed have changed the way people become famous online.

While other platforms are geared more toward highlighting content with mass appeal, TikTok’s algorithms seem just as likely to pluck a new creator out of obscurity as they are to feature a known star. And they’re particularly adept at feeding relevant content to niche communities of users who share a particular interest or identity.
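TikTok’s actual ranking system is proprietary and far richer (watch time, shares, collaborative signals), but the niche-matching behavior described above can be caricatured in a few lines: score each video purely by how well its topics overlap a user’s inferred interests, with no boost for the creator’s follower count. All names and numbers below are hypothetical:

```python
def score(video_tags, user_interests):
    """Relevance = overlap between a video's tags and a user's interests.
    Deliberately ignores follower count, mirroring how a feed like
    TikTok's can surface an unknown creator for the right user."""
    return len(set(video_tags) & set(user_interests))

videos = {
    "new_creator_clip": ["sourdough", "baking"],   # creator with 12 followers
    "celebrity_clip": ["dance", "pop"],            # creator with 40M followers
}
user_interests = ["baking", "sourdough", "diy"]

ranked = sorted(videos, key=lambda v: score(videos[v], user_interests),
                reverse=True)
print(ranked[0])  # the niche clip outranks the celebrity clip for this user
```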

The ability of new creators to get a lot of views very quickly—and the ease with which users can discover so many kinds of content—have contributed to the app’s stunning growth. Other social media companies are now scrambling to reproduce these features on their own apps. Abby Ohlheiser profiles a TikTok creator who was surprised by her own success on the platform.



Lithium-metal batteries

Electric vehicles come with a tough sales pitch: they’re relatively expensive, and you can drive them only a few hundred miles before they need to recharge—which takes far longer than stopping for gas. All these drawbacks stem from the limitations of lithium-ion batteries. A well-funded Silicon Valley startup now says it has a battery that will make electric vehicles far more palatable for the mass consumer.

It’s called a lithium-metal battery and is being developed by QuantumScape. According to early test results, the battery could boost the range of an EV by 80% and can be rapidly recharged. The startup has a deal with VW, which says it will be selling EVs with the new type of battery by 2025.

The battery is still just a prototype that’s much smaller than one needed for a car. But if QuantumScape and others working on lithium-metal batteries succeed, it could finally make EVs attractive to millions of consumers. James Temple describes how a lithium-metal battery works, and why scientists are so excited by recent results.



Data trusts

Technology companies have proven to be poor stewards of our personal data. Our information has been leaked, hacked, and sold and resold more times than most of us can count. Maybe the problem isn’t with us, but with the model of privacy to which we’ve long adhered—one in which we, as individuals, are primarily responsible for managing and protecting our own privacy.

Data trusts offer one alternative approach that some governments are starting to explore. A data trust is a legal entity that collects and manages people’s personal data on their behalf. Though the structure and function of these trusts are still being defined, and many questions remain, data trusts are notable for offering a potential solution to long-standing problems in privacy and security. Anouk Ruhaak describes the powerful potential of this model and a few early examples that show its promise.


Green hydrogen

Hydrogen has always been an intriguing possible replacement for fossil fuels. It burns cleanly, emitting no carbon dioxide; it’s energy dense, so it’s a good way to store power from on-and-off renewable sources; and you can make liquid synthetic fuels that are drop-in replacements for gasoline or diesel. But most hydrogen up to now has been made from natural gas; the process is dirty and energy intensive.

The rapidly dropping cost of solar and wind power means green hydrogen is now cheap enough to be practical. Simply zap water with electricity, and presto, you’ve got hydrogen. Europe is leading the way, beginning to build the needed infrastructure. Peter Fairley argues that such projects are just a first step to an envisioned global network of electrolysis plants that run on solar and wind power, churning out clean hydrogen.
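The economics behind that claim fit in a back-of-the-envelope sketch. Electrolysis consumes roughly 50 kWh of electricity per kilogram of hydrogen in practice (the thermodynamic minimum is about 39 kWh/kg), so the electricity price dominates the final cost; the non-electricity adder below is a hypothetical placeholder for capital and operating costs:

```python
# Rough, illustrative green-hydrogen cost model.
# Assumptions (hypothetical but in a commonly cited range):
KWH_PER_KG_H2 = 50.0        # assumed electrolyzer draw per kg of H2
NON_ELECTRICITY_COST = 0.8  # assumed $/kg adder for capex, maintenance, etc.

def hydrogen_cost_per_kg(electricity_price_per_kwh):
    """Cost of one kg of hydrogen, dominated by the electricity term."""
    return KWH_PER_KG_H2 * electricity_price_per_kwh + NON_ELECTRICITY_COST

# Why falling solar and wind prices matter:
for price in (0.06, 0.03, 0.015):  # $/kWh
    print(f"power at ${price:.3f}/kWh -> "
          f"H2 at ${hydrogen_cost_per_kg(price):.2f}/kg")
```

Halving the electricity price cuts the hydrogen price nearly in half, which is why cheap renewables are the enabling ingredient.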


Digital contact tracing

As the coronavirus began to spread around the world, it felt at first as if digital contact tracing might help us. Smartphone apps could use GPS or Bluetooth to create a log of people who had recently crossed paths. If one of them later tested positive for covid, that person could enter the result into the app, and it would alert others who might have been exposed.
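The Bluetooth variant of this idea works roughly as follows: each phone broadcasts short-lived random identifiers, logs the identifiers it hears, and later checks that log against a published list of identifiers from people who tested positive; all matching happens on the phone. A heavily simplified sketch (the real Apple/Google protocol derives rotating identifiers cryptographically from daily keys and adds duration-based risk scoring):

```python
import secrets

def make_rolling_ids(n):
    """Each phone broadcasts short-lived random identifiers over Bluetooth
    (a simplification of the rotating-identifier scheme)."""
    return [secrets.token_hex(8) for _ in range(n)]

# Alice and Bob each generate their own rolling IDs.
alice_ids = make_rolling_ids(5)

# Bob's phone logs the IDs it heard nearby: some of Alice's, plus strangers'.
bob_heard = set(alice_ids[:3]) | set(make_rolling_ids(4))

# Alice tests positive and uploads her recent IDs to a public list.
published_positive_ids = set(alice_ids)

# Bob's phone checks locally, without revealing who it met to any server.
exposed = bool(bob_heard & published_positive_ids)
print("possible exposure" if exposed else "no match")
```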

But digital contact tracing largely failed to make much impact on the virus’s spread. Apple and Google quickly pushed out features like exposure notifications to many smartphones, but public health officials struggled to persuade residents to use them. The lessons we learn from this pandemic could not only help us prepare for the next pandemic but also carry over to other areas of health care. Lindsay Muscato explores why digital contact tracing failed to slow covid-19 and offers ways we can do better next time.



Hyper-accurate positioning

We all use GPS every day; it has transformed our lives and many of our businesses. But while today’s GPS is accurate to within 5 to 10 meters, new hyper-accurate positioning technologies have accuracies within a few centimeters or millimeters. That’s opening up new possibilities, from landslide warnings to delivery robots and self-driving cars that can safely navigate streets.

China’s BeiDou (Big Dipper) global navigation system was completed in June 2020 and is part of what’s making all this possible. It provides positioning accuracy of 1.5 to 2 meters to anyone in the world. Using ground-based augmentation, it can get down to millimeter-level accuracy. Meanwhile, GPS, which has been around since the early 1990s, is getting an upgrade: four new satellites for GPS III launched in November, and more are expected in orbit by 2023. Ling Xin reports on how the greatly increased accuracy of these systems is already proving useful.
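Ground-based augmentation rests on the differential-correction idea: a base station at a precisely surveyed location sees the same atmospheric and clock errors as a nearby rover, so the base’s measured-minus-true error can be subtracted from the rover’s fix. A one-dimensional toy with hypothetical numbers (real systems correct per-satellite range measurements in three dimensions):

```python
# One-dimensional toy of differential correction.
base_true = 1000.000      # surveyed position of the base station (m)
base_measured = 1002.7    # what the base's receiver actually reports (m)

# Errors are strongly correlated for nearby receivers, so the base's
# observed error is a good estimate of the rover's error.
correction = base_measured - base_true  # +2.7 m

rover_measured = 1507.6
rover_corrected = rover_measured - correction
print(round(rover_corrected, 1))  # 1504.9 -- far closer to truth
```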



Remote everything

The covid pandemic forced the world to go remote. Getting that shift right has been especially critical in health care and education. Some places around the world have done a particularly good job at getting remote services in these two areas to work well for people.

Snapask, an online tutoring company, has more than 3.5 million users in nine Asian countries, and Byju’s, a learning app based in India, has seen the number of its users soar to nearly 70 million. Unfortunately, students in many other countries are still floundering with their online classes.

Meanwhile, telehealth efforts in Uganda and several other African countries have extended health care to millions during the pandemic. In a part of the world with a chronic shortage of doctors, remote health care has been a lifesaver. Sandy Ong reports on the remarkable success of online learning in Asia and the spread of telemedicine in Africa.



Multi-skilled AI

Despite the immense progress in artificial intelligence in recent years, AI and robots are still dumb in many ways, especially when it comes to solving new problems or navigating unfamiliar environments. They lack the human ability, found even in young children, to learn how the world works and apply that general knowledge to new situations.

One promising approach to improving the skills of AI is to expand its senses; currently AI with computer vision or audio recognition can sense things but cannot “talk” about what it sees and hears using natural-language algorithms. But what if you combined these abilities in a single AI system? Might these systems begin to gain human-like intelligence? Might a robot that can see, feel, hear, and communicate be a more productive human assistant? Karen Hao explains how AIs with multiple senses will gain a greater understanding of the world around them, achieving a much more flexible intelligence.


For a look at what technologies made our 10 Breakthrough Technologies lists in previous years, check out this page, which starts with 2020’s list.


10 Ways to Reduce IT Costs Quickly

CIOs can follow these 10 rules when faced with the need to cut IT budgets quickly.

It’s often said that “you can’t cut your way to growth,” but you can cut your way to survival. There are many reasons why an organization may need to make immediate spending cuts to survive, from natural disasters and terrorist attacks to a tanking economy or an aggressive new competitor — or a global pandemic.

“COVID-19 has transformed how people are spending their money, and many businesses such as airlines, cruise lines and cinemas simply have no choice but to cut costs,” said Chris Ganly, Senior Director Analyst at Gartner, during his presentation at the virtual Gartner IT Symposium/Xpo® 2020.

Difficult times call for difficult actions

“When faced with the challenge of immediate cost savings, CIOs need to determine how to approach cost cutting in the least damaging way to the mid- and long-term health of the business,” said Ganly.


Gartner recommends taking a structured and programmatic approach to cost optimization. Research shows that organizations that continue to invest strategically in tough times are more likely to emerge as leaders. But sometimes, difficult times call for difficult actions.

Cutting or stopping projects or services where costs have already been spent or incurred is of limited value. Cutting things that can’t be restarted, that have already been invested in, or that are ready to deliver will hurt when the organization is ready to accelerate again.

10 rules for rapid IT cost reduction

Assess your IT cost reduction options with these rules in mind. 

No. 1: Target immediate impact

Eliminate, reduce or suspend items that will have an impact within weeks or months, not years. Examples include expenses that are incurred and paid monthly or quarterly on a “pay as you go” basis, rather than annually.

No. 2: Reduce, don’t freeze

Focus on costs that can truly be reduced or eliminated, not just frozen for the current period, only to reappear further down the line.

No. 3: Cash is king

Target those items that will have a real cash impact on the profit and loss statement rather than noncash items like depreciation or amortization. For example, cost savings in cloud services have a real cash impact, as opposed to reducing on-premises software licenses or owned assets like hardware. Selling and leasing back assets can provide real cash savings as well.

No. 4: Plan to do it once

Most organizations don’t cut deeply enough the first time, which means they often need to revisit costs and do it again. This creates a destructive and unproductive cycle of uncertainty, effort and lost productivity. This is particularly relevant for staff cuts, where cycles of ongoing reductions can be especially dangerous.


No. 5: Carefully inspect accounts

Work with your finance partner to obtain a solid view of the expense-level detail, such as expense accounts, and key balance sheet accounts, including expense accruals and prepayments. Use this view to identify specific cash reductions that will immediately have an impact.

No. 6: Target unspent and uncommitted expenses

Unless payments (or commitments) can be recovered or prepayments returned, the most immediate impact will be on unspent or uncommitted payments. Evaluate contracts for renegotiation and termination clauses.

No. 7: Address capital

Typically, operating expenditures (opex) are the easiest to impact, but capital expenditures (capex) can also be reduced. Gartner IT Key Metrics Data shows that 25% of the average IT budget is spent on capital, so ensure that the complete range of IT spend is considered for rapid reductions.

No. 8: Sunk costs are irrelevant

When it comes to saving money, it is commonly said that “sunk costs are irrelevant,” meaning that future spend should be considered without relation to past spending or “sunk costs.” From a rapid cost reduction standpoint this is true, but it’s still worth considering whether the saving will be more than the benefit that can and will be delivered by continuing.

No. 9: Address discretionary and nondiscretionary cost

Discretionary spending, such as for new projects, additional capability or services, is often a seemingly easier place to cut. However, even nondiscretionary “run the business” expenses such as IT infrastructure and operations can be cut by reducing usage or service levels.

No. 10: Tackle both variable and fixed costs

Fixed costs are expenses that remain constant, regardless of activity or volume, such as office rent, subscriptions and payroll. For fixed costs, focus on elimination. Variable costs change with activity or volume; for example, telecommunications, contractors and consumables. For variable costs, focus on both reduction and elimination.
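The fixed-versus-variable distinction can be made concrete with a toy budget (all line items and amounts below are hypothetical): fixed costs only go away if eliminated outright, while variable costs can be scaled down with activity.

```python
# Hypothetical monthly line items: (name, amount in $, "fixed" or "variable")
budget = [
    ("office rent",       20000, "fixed"),
    ("saas subscription",  4000, "fixed"),
    ("contractors",       12000, "variable"),
    ("telecom usage",      3000, "variable"),
]

def plan_savings(items, eliminate_fixed, variable_cut=0.3):
    """Fixed costs contribute savings only when eliminated outright;
    variable costs contribute a fractional reduction."""
    savings = 0.0
    for name, amount, kind in items:
        if kind == "fixed" and name in eliminate_fixed:
            savings += amount                  # elimination
        elif kind == "variable":
            savings += amount * variable_cut   # reduction
    return savings

# Eliminate one fixed item, trim variable spend by 30%:
print(plan_savings(budget, eliminate_fixed={"saas subscription"}))
```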

This article has been updated from the December 9, 2019 original to reflect new events, conditions and research.

Crafting an AI strategy for government leaders | Deloitte Insights

Does your agency have a holistic AI strategy?

Artificial intelligence in all its forms can enable powerful public sector innovations in areas as diverse as national security, food safety, and health care—but agencies should have a holistic AI strategy in place.

A strategy is crucial for AI success

The city of Chicago is using algorithms to try to prevent crimes before they happen. In Pittsburgh, traffic lights that use artificial intelligence (AI) have helped cut traffic times by 25 percent and idling times by 40 percent.1 Meanwhile, the European Union’s real-time early detection and alert system (RED) employs AI to counter terrorism, using natural language processing (NLP) to monitor and analyze social media conversations.2

Such examples illustrate how AI can improve government services. As it continues to be enhanced and deployed, AI can truly transform this arena, generating new insights and predictions, increasing speed and productivity, and creating entirely new approaches to citizen interactions. AI in all its forms can generate powerful new abilities in areas as diverse as national security, food safety, regulation, and health care.

But to fully realize these benefits, leaders must look at AI strategically and holistically. Many government organizations have only begun planning how to incorporate AI into their missions and technology. The decisions they make in the next three years could determine their success or failure well into the next decade, as AI technologies continue to evolve.

It will be a challenging period. But such transformations, affecting as they do all or most of the organization and its interactions, are never easy. The silver lining is that agencies have spent the last decade building their cloud, big data, and interoperability abilities, and that effort will help support this next wave of AI technology. AI-led transformation promises to open completely new horizons in process and performance, fundamentally changing how government delivers value to its citizens.

But this can’t be achieved simply by grafting AI onto existing organizations and processes. Maximizing its value will require an integrated series of decisions and actions. These decisions will involve complex choices: Which applications to prioritize? Which technologies to use? How to articulate AI’s value to the workforce? How to manage AI projects? Should we use internal talent, external partners, or both?

This study outlines an integrated approach to an AI strategy that can help government decision-makers answer these questions, and begin the effort required to best meet their own needs.

Overlooking an AI strategy is risky

Without an overarching strategy, complex technology initiatives often drift; at best, they fix easy problems in siloed departments and at worst, they automate inefficiencies. To achieve real transformation across the organization and unlock new value, effective AI implementation requires a carefully considered strategy with an enterprisewide perspective.

But AI strategy is still relatively new—and for many organizations, nonexistent. According to a recent IDC survey, half of responding businesses believed artificial intelligence was a priority, but just 25 percent had a broad AI strategy in place. A quarter reported that up to half of their AI projects failed to meet their targets.3

And government is behind the private sector on the strategy curve. In a 2019 survey of more than 600 US federal AI decision-makers, 60 percent believed their leadership wasn’t aligned with the needs of their AI team. The most commonly cited roadblocks were limited resources and a lack of clear policies or direction from leaders.4

But a coherent AI strategy can attack these barriers while building a compelling case for funding. A winning plan establishes clear direction and policies that keep AI teams focused on outcomes that create significant impacts on the agency mission. Of course, strategy alone won’t realize all the benefits of AI; that will require adequate investments, a level of readiness, managerial commitment, and a lot of planning and hard work. But an effective AI strategy creates a foundation that promotes success.

What constitutes an effective AI strategy for government?

For some, an AI “strategy” is simply a statement of aspirations—but since the lack of planning frustrates and confuses implementation efforts, this definition is clearly inadequate. Other strategies treat AI purely as a technical challenge, with a narrow plan based on a handful of incidents. This limited focus risks missing opportunities and the organizational changes needed to produce a truly transformational impact on performance and mission.

An effective strategy should align technological choices with the overarching organizational vision and, drawing on lessons learned from past technology transformations, incorporate both technical and managerial perspectives.5 A holistic strategy, in turn, should support the broader agency strategy as well as federal goals for AI adoption. Because of the rapid evolution of AI, strategies also should be updated periodically to keep up with technological developments.6

What does this mean for government leaders charged with AI strategy? The scholar Michael Porter has observed that “The essence of strategy is choosing what not to do.”7 With so many different types of technology and potential opportunities, leaders with limited resources should decide carefully about the use of AI—and the choices can seem overwhelming.

In their book Playing to Win, A.G. Lafley and Roger Martin introduce a pragmatic framework that we build upon here.8 The strategic choice cascade (figure 1), adapted here for government AI, is based on the premise that strategy isn’t just a declaration of intent, but ultimately should involve a set of choices that articulate where and how AI will be used to create value, and the resources, governance, and controls needed to do so.9

An integrated AI strategy considers technology and management choices

This approach has five core elements—five sets of critical choices—that, as a whole, comprise a clearly articulated strategy. The first choice an organization must make concerns vision, specifically its level of AI ambition and the related goals and aspirations. With that vision as a guide, the next two elements concern choosing where to focus AI attention and investment, in terms of problem areas, mission demands, services and technologies, and how to create value in those areas, including an approach to piloting and scaling. The final two choices in the cascade concern the capabilities and management systems required to realize the specified value in focus areas. They answer questions such as, “What culture and capabilities should be in place?” including personnel, partners, data, and platforms; and “What management systems are required?” including performance measures, change management, governance, and data management.10

An integrated and holistic strategy

These five sets of choices link together and reinforce one another,11 like the strands of DNA’s double helix: one strand of technology choices entwined with one of managerial and organizational choices. They ensure that the agency’s AI goals are clearly linked to business outcomes and identify the critical activities and interconnections that can help achieve success. Figure 1 illustrates some of the individual choices at each stage through the twin lenses of management and technology.

You need both strands to capture the full value of AI. Without understanding the technology and its potential, decision-makers can’t identify transformative applications; without a managerial perspective, technological staff can fail to identify and address the inevitable change-management challenges. Regardless of their role within the organization, however, government planners must sift through an assortment of potential issues, including changing workforce roles, as well as the data security and ethical concerns that arise when machines make decisions previously made by people.

And these issues are amplified when an AI program’s goal is not just incremental but dramatic improvement.

The five elements of a winning AI strategy

Let’s take a closer look at each element of the strategic choice cascade and see how government agencies are using them in their AI strategies.

Vision: What is our level of AI ambition?

“Transform the Department of Energy into a world-leading AI enterprise by accelerating development, delivery, and adoption of AI.”—Artificial Intelligence and Technology Office, US Department of Energy12

In 2019, the US Department of Energy (DoE) issued this statement of its AI goals and aspirations to transform its operations in line with national strategy. The US Office of the Director of National Intelligence issued a similar vision statement, declaring that it will use AI technologies to secure and maintain a strategic competitive advantage for the intelligence community.13 Both examples represent a high level of AI ambition, directed toward broad, mission-focused transformation.

Other agencies may have less ambitious goals; they may wish to use AI to address a particular, long-standing problem, to redesign a specific process, to free up staff, increase productivity, or improve customer interactions. But while government AI strategies can have multiple objectives and ambitions, all of them must consider the well-being of people and society. For example, AI-based intelligent automation can help make decisions for both simple and complex actions. But agencies that contemplate delegating more decision-making to machines must be able to understand and measure the resulting risks and social impacts, as well as the benefits. Thus, communicating and socializing the AI strategy with citizens is key to gaining wider acceptance for AI in government.

For this reason and others, the organization’s goals, aspirations, and requirements regarding the ethics of AI should inform this and every other level of the cascade.

Regardless of the ambition, listing aspirations and goals is only a first step. As the vision crystallizes, the next steps—the identification of opportunities and execution requirements—come into focus. As the US Department of Defense (DoD) writes in its AI strategy, “Realizing this vision requires identifying appropriate use cases for AI across the DoD, rapidly piloting solutions, and scaling successes across the enterprise.”

Vision: Strategic choice questions to address

What specific goals, aspirations, and requirements do we have for AI?

Which elements of our larger strategy and goals will AI support?

What’s the long-term ambition behind our investment in AI?

How will our organizational values help us deal with questions of ethics, privacy, and transparency?

Will AI produce savings or some other positive outcome to justify the investment?

Focus: Where should we concentrate our AI investments?

“The Department of Defense aims to apply AI to key mission areas, including:

  • Improving situational awareness and decision-making with tools, such as imagery analysis, that can help commanders meet mission objectives while minimizing risks to deployed forces and civilians.
  • Streamlining business processes by reducing the time spent on common, highly manual tasks and reallocating DOD resources to higher-value activities.”

—US Department of Defense AI strategy14

AI-based applications can reduce backlogs, cut costs, stretch resources, free workers from mundane tasks, improve the accuracy of projections, and bring intelligence to scores of processes, systems, and uses.15 A variety of solutions are available; the important question is the choice of problems and opportunities.

Which applications and problems should we tackle?

The DoE focuses its technology-based initiatives in this way:

The mission of the Energy Department is to ensure America’s security and prosperity by addressing its energy, environmental and nuclear challenges through transformative science and technology solutions.16

Its budget priorities reflect this commitment. Much of DoE’s planned US$20 million for research and development in AI and machine learning will go toward two goals: more secure and resilient power grid operation and management, and research toward transformative scientific solutions.17

Specific statements of intent provide “guardrails” for DoE decision-makers, keeping them focused on priority needs. Each choice involved may be explained and championed separately, but, in practice, each choice must align with and reinforce the others. In this case, DoE’s “where to focus” choice was reinforced by its “management systems required” decision to launch the Artificial Intelligence and Technology Office in September 2019. The new office will focus on accelerating and coordinating AI delivery and scaling it across the department.18

Back office, customer engagement, mission focus—or all three?

Leaders deciding where to focus their AI efforts can consider the question through several different lenses. One might consider which problems to address; what processes to focus on; or what part of the organization would benefit most from the investment. There’s no one “best way,” and in fact looking through multiple lenses can be helpful.

Figure 2 illustrates potential AI applications in human services, with examples from all three lenses. In human services, AI has considerable potential to transform processes for caseworkers and administrators, improve service, and enhance mission outcomes. Policymakers will have to determine how to prioritize advanced technological solutions. Should AI be used to automate case documentation and reporting? To create chatbots to answer queries? To compare our beneficiary programs with those of other organizations, to better identify the optimal mix of services? Or should we strive for all three goals?

A broad initial view can help government agencies find the right blend of uses. It can help them avoid becoming focused on a single class of applications—simple automation of back-office tasks, for example—at the expense of more transformative opportunities.

Every agency faces a similar set of questions about where to focus its activities, and how to balance quick solutions with those promising longer-term transformation. A carefully articulated AI strategy that clearly establishes priority focus areas should answer these questions. To do so, it must consider both managerial and technological perspectives.

Sample AI government use cases in human services

The technology perspective

The technological focus should be determined by the mission need. Broadly, the pool of available technologies roughly corresponds with each of the bubbles in figure 2. Intelligent automation tools, also called robotic process automation, are highly relevant to back-office functions. Customer interaction or engagement technologies bring AI directly to the customers, whether they’re citizens, employees, or other stakeholders. Finally, a variety of AI insight tools can identify patterns or develop predictions that can be highly relevant to many agency missions.

For agencies beginning their AI journey, this level of detail will be sufficient. Others may benefit from a deeper dive into the opportunities created by specific technologies being deployed in a variety of government settings, separately or in concert. These applications include intelligent robotics, computer vision, natural language processing, speech recognition, machine translation, rules-based systems, and machine learning (figure 3).

Different types of AI can be used for different government problems

The technology perspective, obviously, must go beyond separate innovations to consider how they’ll be used. Any of these technologies can be implemented in different ways, with differing degrees of human involvement:

  • Assisted intelligence: Harnessing the power of big data, the cloud, and data science to aid decision-making.
  • Augmented intelligence: Using machine learning capabilities layered over existing systems to augment human ability.
  • Autonomous intelligence: Digitizing and automating processes to deliver intelligence upon which machines, bots, and systems can act.

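The three degrees of human involvement can be made concrete with a small sketch. This is a hypothetical routing function, not drawn from the source; the eligibility-scoring scenario, threshold, and field names are invented for illustration.

```python
from enum import Enum

class Mode(Enum):
    ASSISTED = "assisted"      # surface information; a human decides
    AUGMENTED = "augmented"    # recommend; a human approves or overrides
    AUTONOMOUS = "autonomous"  # the system acts; humans audit afterward

def route_case(score: float, mode: Mode, threshold: float = 0.8) -> dict:
    """Route a hypothetical eligibility score according to the degree of
    human involvement chosen for the application."""
    if mode is Mode.ASSISTED:
        return {"action": "show_score", "score": score}
    if mode is Mode.AUGMENTED:
        return {"action": "recommend",
                "approve": score >= threshold,
                "requires_human_signoff": True}
    return {"action": "auto_decide",
            "approve": score >= threshold,
            "logged_for_audit": True}
```

The same underlying model can back all three modes; what changes is where the human sits in the loop.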
A skilled AI implementation team will understand that each technology has different uses, with distinct capabilities, benefits, and weaknesses. The list of possible use cases and available technologies will continue to expand and grow in significance. (See AI-augmented government for a primer on AI technologies and their deployment in government.)

The focus questions for the technology strategy have been answered when the organization knows where to concentrate its investments, in terms of problems and processes, with a level of detail that seems appropriate to the decision-makers. As with all strategies, more detail will be needed as the strategy is translated and further developed within the organization.

Focus: Strategic choice questions to address

Managerial

  1. Who will be the user of AI, and who will benefit from its use?
  2. Which problems will AI address?
  3. How will AI transform our mission or the way in which we pursue it?
  4. In which mission areas or functions will AI be used? And to further which mission?
  5. Which processes and services will AI affect? Back-office processes, customer interfaces, or other activities?

Technical

  1. Which goals should we pursue with AI, and which technology or combination of technologies should we use?
  2. Will the application involve assisted, augmented, or autonomous intelligence?
  3. How should we prepare for and accommodate future developments in AI and the underlying technologies? (Note that choices may need to be reexamined as capabilities and resources are explored.)

Success: How will AI deployment create value?

“It is likely that the most transformative AI-enabled capabilities will arise from experiments at the ‘forward edge,’ that is, discovered by the users themselves in contexts far removed from centralized offices and laboratories. Taking advantage of this concept of decentralized development and experimentation will require the department to put in place key building blocks and platforms to scale and democratize access to AI. This includes creating a common foundation of shared data, reusable tools, frameworks and standards, and cloud and edge services.”—US Department of Defense AI strategy19

AI-based technologies can create such wide-ranging impacts, in so many fields and at so many levels, that it can be difficult to imagine how they would change our own work environments. Next to budget constraints, “lack of conceptual understanding about AI (e.g., its proposed value to mission)” was the second-most common barrier to AI implementation cited by respondents in a 2019 NextGov study.20

That’s why an AI strategy should articulate its value both to the enterprise in general, including the workforce, and to those affected by its mission, which includes taxpayers as well as public and private stakeholders, reflecting a clear alignment with broader goals, strategies, and policies. Proactive communication regarding the value that AI creates for an agency and those it serves should help to allay fears of misuse and enhance trust.

Articulating the value of AI

It may help to focus on a few transformative capabilities to start. The most common starting points for AI value creation in government today are rapid data analysis, intelligent automation, and predictive analytics.

Data analysis. AI feeds on data as whales consume krill: by the ton, in real time, and in big gulps. AI-based systems can make sense of large amounts of data quickly, which translates into reduced wait times, fewer errors, and faster emergency responses. They can also mine deeper insights, identifying underserved populations and creating better customer experiences.21

Intelligent automation. AI can speed up existing tasks and perform jobs beyond human ability. Robert Cardillo, former director of the National Geospatial-Intelligence Agency, estimated that without AI, his agency would have needed to hire more than 8 million imagery analysts by 2037, simply to keep up with the flow of satellite intelligence.22

Predictive analytics. Police in Durham, North Carolina, have used NLP to spot otherwise-hidden crime trends and correlations in reports and records, allowing for better predictions and quicker interventions. The effort contributed to a 39 percent drop in violent crime in Durham between 2007 and 2014.23
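A toy version of that pattern-spotting idea, not the Durham system itself: counting which terms co-occur across free-text incident reports, so pairs that recur across many reports can surface a trend worth investigating. The sample reports and the term extraction are invented for illustration.

```python
import re
from collections import Counter
from itertools import combinations

def cooccurring_terms(reports, top_n=3):
    """Count term pairs that appear together in the same report; pairs
    that recur across many reports hint at a correlation worth examining."""
    pair_counts = Counter()
    for text in reports:
        terms = set(re.findall(r"[a-z]+", text.lower()))
        pair_counts.update(combinations(sorted(terms), 2))
    return pair_counts.most_common(top_n)

reports = [
    "robbery near transit stop after dark",
    "assault reported near transit stop",
    "robbery at transit stop late night",
]
top = cooccurring_terms(reports)  # ('stop', 'transit') appears in all three
```

Production NLP systems use far richer techniques (entity extraction, topic models), but the underlying move is the same: aggregate weak signals across many documents.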

Showing value to the workforce

When explaining how AI creates value, it’s imperative to include the perspectives of the organization and its workforce. The benefits to the organization are relatively easy to articulate—process automation improves speed and quality; analytics identify investment priorities in areas of utmost need; and predictive tools deliver completely new insights and services.

For the workforce, however, the argument is more nuanced. Staff members must be reassured that AI won’t eliminate their jobs. The strategy should show, as applicable, how AI-based tools such as language translation can enhance their existing roles. Freeing employees from mechanical tasks in favor of more creative, problem-solving, people-facing work creates more value for constituents and can greatly enhance job satisfaction.

In an age of electronic warfare, for example, US Army officers are constantly collecting, analyzing, and classifying unknown radio frequency signals, any of which may come from allies, malicious actors, or random sources. How can they manage this overload? The Army created a challenge that drew some 150 teams from industry, research organizations, and universities. Models offered by the winning teams, based on machine learning, gave the Army a head start in developing a solution to sort through the signal chaos.24 The resulting models have the potential to sift through data with much greater speed and accuracy.25

Governments should build a shared vision of an augmented workplace with the public-sector professionals who will be working alongside these technologies, or incorporating them into their own work. For these professionals, as well as citizens and other stakeholders, this vision must address the ethical considerations of AI applications (see sidebar “Managing ethical issues”) and emphasize its value to the organization and its mission.

Managing ethical issues

In view of the risks and uncertainty associated with AI, many governments are developing and implementing regulatory and ethical frameworks for its implementation.26 The US Department of Defense (DoD), for example, is planning to hire an AI ethicist to guide its development and deployment of AI.27 Other methods being used to meet this challenge include:

Creating privacy and ethics frameworks. Many governments are formalizing their own approach to these risks; the United Kingdom has published an ethics framework to clarify how public entities should treat their data.28

Developing AI toolkits. A toolkit is a collection of tools, guidelines, and principles that helps AI developers consider ethical implications as they develop algorithms for governments. Dubai’s toolkit includes a self-assessment tool that evaluates AI systems against the city’s ethics standards.29

Mitigating risk and bias. Risk and bias can be diminished by encouraging diversity and inclusion in design teams. Agencies also should train developers, data scientists, and data architects on the importance of ethics relating to AI applications.30 To reduce historical biases in data, it’s important to use training datasets that are diverse in terms of race, gender, ethnicity, and nationality.31 Tools for detecting and correcting bias are evolving rapidly.
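One such check, sketched as a minimal example: comparing selection (approval) rates across groups in a labeled dataset. The group labels, data, and tolerance below are illustrative assumptions, not from the source.

```python
def selection_rates(records):
    """Approval rate per group from (group, approved) pairs -- the kind
    of disparity check that bias-detection tooling automates."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Difference between the highest and lowest group approval rates."""
    return max(rates.values()) - min(rates.values())

data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
rates = selection_rates(data)
# A gap above some chosen tolerance (say 0.2) would flag the model for review.
```

A single rate comparison is only a starting point; mature tools examine many metrics and subgroups, but each is built from counts like these.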

Guaranteeing transparency. Agencies should emphasize the creation of algorithms that can enhance transparency and increase trust among those affected. According to a program note by the Defense Advanced Research Projects Agency, “Explainable AI … will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.”32

Scaling as a strategy

After identifying goals for AI, the next step is deciding how to scale it across the organization. The architects of the scaling strategy should always keep in mind the relatively low historical success rate for many large-scale government technology projects.33

Moving AI from pilot to production isn’t a simple matter of installation. It will involve new challenges, both technical and managerial, involving the ongoing cleaning and maintenance of data; integration with a range of systems, platforms, and processes; providing employees with training and experience; and, perhaps, changing the entire operating model of some parts of the organization. Our advice: Break the process into steps or pieces, each clearly articulated to increase the likelihood of success. Test with pilot programs before scaling. And don’t neglect the “capabilities” element of strategy, which, when coupled with an accurate assessment of today’s AI readiness, can identify gaps in current technical and talent resources.

The Central Intelligence Agency’s former director of digital futures, Teresa Smetzer, urged her agency to “start small with incubation, do proofs of concept, evaluate multiple technologies [and] multiple approaches. Learn from that, and then expand on that.”34

Define measures of AI performance and value

The architects must also decide how to measure the value of proposed AI solutions. And if a potential value has been identified, how will it be tracked? On deployment, for example, or usage? The ability to accurately identify value—and plan deployment, sets of metrics, and expectations—will depend, in part, on the maturity and complexity of the AI technology and application. It will also be important to explicitly define performance standards in terms of accuracy, explainability, transparency, and bias.
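For classification systems, the accuracy side of those performance standards is usually built from a handful of counts. A minimal sketch follows; the example numbers are invented, and acceptable target values are a policy choice not suggested here.

```python
def classification_metrics(tp, fp, fn, tn):
    """Basic performance measures from a binary classifier's confusion
    matrix: true/false positives (tp, fp) and negatives (tn, fn)."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
    }

m = classification_metrics(tp=40, fp=10, fn=5, tn=45)
# accuracy 0.85, precision 0.80, recall ~0.89 on this invented example
```

Tracking several metrics matters because they trade off: a system tuned only for accuracy can quietly sacrifice recall on the cases an agency most needs to catch.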

Success: Strategic choice questions

Managerial

  1. What specific value will be created by applying AI? Are we improving a current process or allowing the organization to do something new? How will it be measured?
  2. How do we define success with regard to our workforce? What goals do we set or what actions do we take to increase AI’s value to our employees?
  3. How will we define and demonstrate success in deployment?
  4. How can we ensure value is achieved through the ethical usage of AI?

Technical

  1. How mature or complex are the types of AI we might use? What are the appropriate performance expectations for this type of AI?
  2. What applications and objectives should a successful pilot have? How do we define success for effective scaling? What timing should be used to implement AI? Which metrics should be used?

Capabilities: What do we need to execute our AI strategy?

“The intelligence community must develop a more technologically sophisticated and enterprise aware workforce. We must:

  • Invest in programs for training and retooling the existing workforce in skills essential to working in an AI-augmented environment.
  • Redefine recruitment, compensation, and retention strategies to attract talent with high demand skills.
  • Develop and continually expand partnership programs with industry, including internship and externship programs, to increase the number of cleared individuals with relevant skills both in and out of government.”

—US Office of the Director of National Intelligence35

An agency can have the right roadmap, technology, and funding for an AI program and still fail at execution. Different capabilities can help ensure success.

Bridging the AI/data science skills gap

One of the most fundamental questions government leaders must consider is, “Who can build an AI system?” In a Deloitte survey, about 70 percent of public-sector respondents identified a skills gap in meeting the needs of AI projects.36 While the private sector can often attract the right data science talent with competitive compensation, the public sector faces the constraint of relatively limited salary ranges.37

Of course, government work has its own attractions. Many professionals, for example, say they want work that is meaningful and improves the world around them.38 Agency recruiters should emphasize the vital problems they’re fighting and the opportunities to serve society that they’re pursuing through the promise of AI.39

But the constant talent challenge needn’t derail an AI strategy. When recruiting falls short, agencies can acquire expertise in other ways, such as:

  • Upskilling and reskilling. Using internal trainers or external partners (such as colleges and universities), agencies can establish training programs to grow thriving AI communities in government.40 For example, the DoD has started offering an AI career path, using a range of digital learning platforms operated by schools and businesses to help personnel keep pace with AI developments in the private sector and give them the knowledge they need to adapt to new roles. These programs are made available across levels and roles, from junior personnel to AI engineers to senior leaders, and combine digital content with tailored instruction from leading experts.41
  • AI “guilds.” The UK Government Digital Service (GDS) has created a data science community across multiple agencies. The community organizes conferences and frequent knowledge-sharing events that showcase different approaches agencies are taking toward data science. GDS also runs a data science accelerator program that offers participants the chance to build their skills on an actual government project with guidance from experienced mentors.42
  • Competitions and prizes. Government entities also host competitions and partnerships to entice innovative solutions from outside. In March 2019, the Centers for Medicare & Medicaid Services launched a contest, with a US$1.65 million prize for the development of an AI tool that could predict patient health care outcomes.43
  • Partnerships. Smaller agencies with limited resources are exploring partnerships with educational institutions and businesses. In conjunction with IBM, for example, the city of Abu Dhabi launched AI training workshops for local government employees. The program was designed to promote a better understanding of AI and its benefits while also improving decision-making skills.44

Agencies should consider whether they will develop AI talent capabilities internally, hire contractors, or use both. On the technical side, agencies must decide whether AI software can be developed in-house, purchased off-the-shelf, or commissioned as custom code. These issues require careful consideration from agency leaders and will vary depending on the agency, its level of ambition, and the maturity of the technology in question.

Building the right foundation

Just as human capabilities are essential to a winning AI strategy, so are the appropriate architectures, infrastructures, data integration, and interoperability abilities. Agencies should test and modify infrastructure before rolling out solutions, of course, but also determine whether their existing data center can manage the expected AI workload. Often the answer is “yes” for a simple proof of concept, but “no” for a production solution. This also raises the issue of how data and cloud strategy will come into play. For example:

  • Data: Since AI quality depends on its ability to access high-quality data, data strategy and governance are all-important. The US Department of Health and Human Services’ data strategy includes the consolidation of data repositories into a shareable environment accessible to all authorized users.45 For robust data governance, agencies must clearly define business and system owners and stewards, with clear roles and accountability, as well as guidelines for security, privacy, and compliance issues.
  • Cloud: The DoD is preparing a foundation for AI with a US$10 billion contract that aims to move its computing systems into the cloud.46 The Pentagon says that this will help it inject artificial intelligence into its data analysis, reaping benefits such as providing soldiers with real-time data during missions.47

Cloud and data strategies are essentially universal issues for AI deployment. Others, such as necessary organizational changes or the need to modernize individual systems, will be specific to each agency and the ambition, applications, and value it pursues.

Capabilities: Strategic choice questions

Managerial

  1. What are the skills needed to implement AI applications? What will scaling require? Is the workforce adequately trained? If not, how can we recruit talent with the necessary knowledge and expertise?
  2. What will the workforce need in order to accept AI implementation and training?
  3. How can we recruit AI talent? What advantages could be achieved by partnering with other entities?
  4. What academic and industry partners can we use?
  5. What organizational or cultural changes will be necessary?

Technical

  1. What specific tools and platforms can be used for this AI solution?
  2. What data governance and modern data architectures will it need?
  3. What technology and data issues must we face?
  4. What other process changes will be required?

Creating the management systems

“The DoD will identify and implement new organizational approaches, establish key AI building blocks and standards, develop and attract AI talent, and introduce new operational models that will enable DoD to take advantage of AI systematically at enterprise scale.”—US Department of Defense48

Some leaders breathe a sigh of relief once their roadmap includes the necessary AI technologies, processes, capabilities, and funding. That relief is often premature. Too little attention is typically paid to developing the management systems needed to validate specific initiatives (including cost/benefit analyses), evangelize the project to stakeholders and employees, scale projects from pilot to implementation, and track performance.

Good change management and governance protocols are important. AI is likely to be disruptive to organizations and governments, due both to its novelty and to its potential complexity. Reviewing all these protocols is beyond the scope of this paper, although we’ve already alluded to the importance of building a compelling story and value proposition for the workforce and other stakeholders. For governance, we touched upon the need to establish and track performance measures—and to revisit them regularly.

This section focuses on some of the emerging structures and systems needed to bring AI strategies to fruition.

Establish an AI operating model

One common best practice for a complex technology project is to establish a center of excellence (CoE) or “hub.” Centers of excellence are communities of specialists built around a topic or technology that develop best practices, build use-case solutions, provide training, and share resources and knowledge.

CoEs draw technology and business stakeholders together for a common purpose; in effect, they partner to identify and prioritize use cases, develop solutions, create innovations, and share knowledge. For example, DoD chief data officer Michael Conlin established the Joint Artificial Intelligence Center (JAIC) with the overarching goal of accelerating the delivery of AI-enabled capabilities, scaling AI departmentwide, and synchronizing DoD AI activities to expand Joint Force advantages.49 Similarly, the Department of Veterans Affairs’ first director of artificial intelligence, Dr. Gil Alterovitz, is leading an effort to connect the department with AI experts in academia and industry.50

Develop governance frameworks

JAIC, again, is more than a simple CoE; it’s a critical element of AI governance. Governance challenges must be addressed if change management is to be effective as AI is adopted across mission areas and back-office operations. And since data-sharing is a vital element in achieving AI’s full impact, explicit governance models must address when and how data will be shared, and how they will be protected. These challenges might be typical of any digital project, but AI also brings unique governance challenges of its own.

Government agencies increasingly use algorithms to make decisions—to assess the risk of crime, allocate energy resources, choose the right jobs for the unemployed, and determine whether a person is eligible for benefits.51 To instill confidence and trust in AI-based systems, well-defined governance structures must explain how algorithms work and tackle issues of bias and discrimination.52
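One simple way to make such a decision explainable is to have the system return the rules it fired alongside the outcome. The rules and thresholds below are invented for illustration; real eligibility logic would come from statute and policy.

```python
def benefits_decision(income, household_size):
    """Rule-based eligibility decision that records which criteria fired,
    so the outcome can be explained to applicants and auditors."""
    reasons = []
    if income <= 30_000:
        reasons.append("income at or below 30,000")
    if household_size >= 4:
        reasons.append("household of four or more")
    return {
        "eligible": bool(reasons),
        "reasons": reasons or ["no qualifying criterion met"],
    }
```

Rules-based systems are the easy case; for learned models, explanation tooling must reconstruct comparable reason codes after the fact, which is why explainability belongs in the governance framework rather than being an afterthought.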

Several nations are establishing coordinating agencies and working groups to govern AI. Among these is the United Kingdom’s Centre for Data Ethics and Innovation, which will advise the government on how data-driven technologies such as AI should be governed, help regulators support responsible innovation, and build a trustworthy system of governance.53 Singapore has published an AI governance framework to help organizations align internal structures and policies in a way that encourages the development of AI in a fair, transparent, and explainable manner.54

Establish AI deployment structures

Another challenge is thinking through how to advance the many pieces of a project from design through execution. One common trap is “pilot purgatory,” in which projects fail to scale. The reasons for this are varied, but often fall into one of these categories:

  • Pilots are designed narrowly, and thus more easily achieved, but do not have much of an impact on a wider audience. Impact is what generates buy-in.
  • A pilot generates limited returns, despite considerable expenditure of financial and human capital. Stakeholders become reluctant to move it to implementation.
  • Scaling AI pilots requires adapting new technologies and different ways of working, which some workers inevitably resist.

One way to avoid such potholes is to borrow from the studies of behavioral scientists and build quick wins into the process. At the start of your AI journey, focus resources on quick payoffs rather than more lengthy transformative projects. They can be relatively easy to execute and generate high mission value. Further, before launching pilots, agencies should prioritize business issues and subject potential AI solutions to a cost-benefit analysis. Then pilots can be launched to see if the hypothetical benefits can be achieved.

Establish structures and systems for scaling

For public agencies, processes that integrate AI within existing computational infrastructures remain a challenge.55 While solutions vary, four areas often require significant new systems, structures, or leadership. An AI strategy should carefully consider whether changes will be required in:

  • Technical infrastructure. AI is not part of most agencies’ IT stack, but it should be. To deploy AI applications, agencies require high-bandwidth, low-latency, and flexible architectures. And since data is the food that feeds the AI beast, special care is needed to ensure existing and forthcoming data is cleansed to remove inaccurate, incomplete, improperly formatted, and duplicated data.56
  • Organization and team structure. AI can be one of the most effective antidotes to the siloed organization. To generate the most benefit, AI systems rewrite processes and integrate data from across the organization. AI implementation should involve cross-functional teams from the start. Representation from many teams empowers workers to make decisions based on fast-flowing data.
  • Talent management. As noted above, AI-driven organizations need to bring in new capabilities but also need to train current workers. New retention strategies will be needed, compensation will have to be redefined, and partnerships will need to be developed.57
  • Culture. Existing culture often hinders efforts to infuse AI into the organization. Agencies should inculcate a data-driven culture by incentivizing data-based experimentation, appointing informed change agents, and appealing to influential leaders to become sponsors of AI initiatives in the organization.58

Ensure data quality for AI applications

Data is often stored in a variety of formats, in multiple data centers and in duplicate copies. If federal information isn’t current, complete, consistent, and accurate, AI might make erroneous or biased decisions. All agencies should ensure their data is of high quality and that their AI systems have been trained, tested, and refined.59
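A minimal sketch of that cleansing step, checking records for completeness and duplication before they reach a training pipeline; the field names and sample records are invented for illustration.

```python
def cleanse(records, required=("id", "name", "date")):
    """Drop records that are incomplete (missing or empty required field)
    or duplicated (same key fields seen before)."""
    seen, clean = set(), []
    for rec in records:
        if any(not rec.get(field) for field in required):
            continue                    # incomplete: skip
        key = tuple(rec[field] for field in required)
        if key in seen:
            continue                    # duplicate: skip
        seen.add(key)
        clean.append(rec)
    return clean

raw = [
    {"id": 1, "name": "Ada", "date": "2020-01-02"},
    {"id": 1, "name": "Ada", "date": "2020-01-02"},   # duplicate
    {"id": 2, "name": "", "date": "2020-01-03"},      # incomplete
    {"id": 3, "name": "Grace", "date": "2020-01-04"},
]
# cleanse(raw) keeps only the first and last records
```

Checks like these are cheap to run continuously, which matters because data quality degrades between audits, not just before the first one.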

JAIC is partnering with the National Security Agency, the US Cyber Command, and the DoD’s cybersecurity vendors to streamline and standardize data collection as a foundation for AI-based cyber tools. This standardized data can be used to create algorithms that can detect cyberattacks, map networks, and monitor user activity.60

Management systems: Strategic choice questions

Managerial

  1. What’s our operating model? Who will be responsible for AI? How should we track performance measures and indicators of AI’s impact?
  2. What communication strategies should we use to gain the trust of employees, external partners, media, and the public?
  3. What change management skills do we need?

Technical

  1. How can we manage AI risks?
  2. How can we manage the piloting and scaling of AI across different departments?
  3. How should AI resources be developed, accessed, housed, allocated, and managed?

Conclusion

As with the introduction of electricity at the beginning of the 20th century and the internet more recently, AI can fundamentally change how we live and work. As key developers and users of AI-based systems, government agencies have a special responsibility to consider not only how it can be used to make work more productive and innovative, but also to think about its potential effects, positive and negative, on society at large.

Agencies should begin the AI journey with a strategy that entwines technology capabilities with mission objectives; creates a clear methodology for decision-making; sets guardrails and objectives for implementation; and emphasizes transparency, accountability, and collaboration. Government leaders are already acting on AI investments. A clear, holistic strategy will help them ensure their AI programs are more likely to achieve their potential for the mission, the workforce, and citizens today and tomorrow.


Why AI Is The Future Of Cybersecurity – Enterprise Irregulars

These and many other insights are from Capgemini’s Reinventing Cybersecurity with Artificial Intelligence Report published this week. You can download the report here (28 pp., PDF, free, no opt-in). Capgemini Research Institute surveyed 850 senior executives from seven industries, including consumer products, retail, banking, insurance, automotive, utilities, and telecom. 20% of the executive respondents are CIOs, and 10% are CISOs. Enterprises headquartered in France, Germany, the UK, the US, Australia, the Netherlands, India, Italy, Spain, and Sweden are included in the report. Please see page 21 of the report for a description of the methodology.

Capgemini found that as digital businesses grow, their risk of cyberattacks increases exponentially. 21% said their organization experienced a cybersecurity breach leading to unauthorized access in 2018. Enterprises are paying a heavy price for cybersecurity breaches: 20% report losses of more than $50 million. Centrify’s most recent survey, Privileged Access Management in the Modern Threatscape, found that 74% of all breaches involved access to a privileged account. Privileged access credentials are hackers’ favorite target for initiating a breach, exfiltrating valuable data from enterprise systems, and selling it on the Dark Web.

Key insights include the following:

  • 69% of enterprises believe AI will be necessary to respond to cyberattacks. The majority of telecom companies (80%) say they are counting on AI to help identify threats and thwart attacks. Capgemini found the telecom industry has the highest reported incidence of losses exceeding $50M, making AI a priority for thwarting costly breaches in that industry. It’s understandable why Consumer Products (78%) and Banking (75%) rank second and third, given each industry’s growing reliance on digitally based business models. US-based enterprises place the highest priority on AI-based cybersecurity applications and platforms, 15% higher than the global average when measured on a country basis.

  • 73% of enterprises are testing use cases for AI in cybersecurity across their organizations today, with network security leading all categories. Endpoint security is the third-highest priority for investing in AI-based cybersecurity solutions, given the proliferation of endpoint devices, which are expected to exceed 25B by 2021. Internet of Things (IoT) and Industrial Internet of Things (IIoT) sensors, and the systems they enable, are exponentially increasing the number of endpoints and threat surfaces an enterprise needs to protect. The old “trust but verify” approach to enterprise security can’t keep up with the pace and scale of today’s threatscape growth. Identities are the new security perimeter, and they require a Zero Trust Security framework to be secure. Be sure to follow Chase Cunningham of Forrester, Principal Analyst and the leading authority on Zero Trust Security, to keep current on this rapidly changing area. You can find his blog here.

  • 51% of executives are making extensive use of AI for cyber threat detection, outpacing prediction and response by a wide margin. Enterprise executives are concentrating their budgets and time on detecting cyber threats with AI, ahead of predicting and responding to them. As enterprises mature in their use and adoption of AI as part of their cybersecurity efforts, prediction and response will correspondingly increase. “AI tools are also getting better at drawing on data sets of wildly different types, allowing the ‘bigger picture’ to be put together from, say, static configuration data, historic local logs, global threat landscapes, and contemporaneous event streams,” said Nicko van Someren, Chief Technology Officer at Absolute Software.

  • 64% say that AI lowers the cost to detect and respond to breaches and reduces the overall time taken to detect threats and breaches by up to 12%. The cost reduction for a majority of enterprises ranges from 1% to 15%, with an average of 12%. Dwell time, the amount of time threat actors remain undetected, drops by 11% with the use of AI. These reductions are achieved by continuously scanning for known or unknown anomalies that show threat patterns. PetSmart, a US-based specialty retailer, was able to save up to $12M by using AI-based fraud detection from Kount. By partnering with Kount, PetSmart implemented an AI/machine learning technology that aggregates millions of transactions and their outcomes, determining the legitimacy of each transaction by comparing it against all other transactions received. As fraudulent orders were identified, they were canceled, saving the company money and avoiding damage to the brand. The top 9 ways Artificial Intelligence prevents fraud provides insights into how Kount’s approach to unsupervised and supervised machine learning stops fraud.

  • Fraud detection, malware detection, intrusion detection, scoring risk in a network, and user/machine behavioral analysis are the five highest-potential AI use cases for improving cybersecurity. Capgemini analyzed 20 use cases across information technology (IT), operational technology (OT), and the Internet of Things (IoT) and ranked them according to their implementation complexity and resultant benefits (in terms of time reduction). Based on this analysis, Capgemini recommends a shortlist of five high-potential use cases that combine low complexity with high benefits. 54% of enterprises have already implemented these five high-impact cases. The accompanying graphic compares the recommended use cases by level of benefit and relative complexity.

  • 56% of senior execs say their cybersecurity analysts are overwhelmed and close to a quarter (23%) are not able to successfully investigate all identified incidents. Capgemini found that hacking organizations are successfully using algorithms to send ‘spear phishing’ tweets (personalized tweets sent to targeted users to trick them into sharing sensitive information). AI can send the tweets six times faster than a human and with twice the success. “It’s no surprise that Capgemini’s data shows that security analysts are overwhelmed. The cybersecurity skills shortage has been growing for some time, and so have the number and complexity of attacks; using machine learning to augment the few available skilled people can help ease this. What’s exciting about the state of the industry right now is that recent advances in Machine Learning methods are poised to make their way into deployable products,” said Nicko van Someren, Chief Technology Officer at Absolute Software.
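At their core, the detection use cases above reduce to anomaly scoring: flagging transactions or events that deviate sharply from an established pattern. The following is a minimal, illustrative sketch only (a toy stand-in, not Kount's or any vendor's actual method; the order amounts are hypothetical), using a robust median/MAD score so the outlier cannot mask itself:

```python
from statistics import median

def mad_anomalies(values, threshold=3.5):
    """Flag values far from the median, scaled by the median absolute
    deviation (MAD) -- robust to the very outliers it is hunting for."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 rescales the MAD to be comparable to a standard deviation.
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical order amounts with one wildly out-of-pattern transaction.
orders = [25.0, 30.0, 27.5, 22.0, 31.0, 26.0, 24.5, 29.0, 5000.0]
print(mad_anomalies(orders))  # -> [8]
```

Real systems score many features at once (device, geography, velocity, purchase history) and combine unsupervised detectors like this with supervised models trained on labeled fraud outcomes.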

Conclusion

AI and machine learning are redefining every aspect of cybersecurity today. From improving organizations’ ability to anticipate and thwart breaches, and protecting the proliferating number of threat surfaces with Zero Trust Security frameworks, to making passwords obsolete, AI and machine learning are essential to securing the perimeters of any business. One of the most vulnerable and fastest-growing threat surfaces is mobile phones. Two recent research reports – Say Goodbye to Passwords (4 pp., PDF, opt-in), from MobileIron in collaboration with IDG, and Passwordless Authentication: Bridging the Gap Between High-Security and Low-Friction Identity Management (34 pp., PDF, opt-in), by Enterprise Management Associates (EMA) – provide fascinating insights into the passwordless future. They reflect and quantify how ready enterprises are to abandon passwords for more proven authentication techniques, including biometrics and mobile-centric Zero Trust Security platforms.


Only 7% of companies have digital-savvy leadership | CIO Dive

Dive Brief:

  • Just 7% of large companies have digitally savvy executive teams, according to research published Wednesday by MIT Sloan Management Review. The authors of the study define digital savviness as “an understanding, developed through experience and education, of the impact that emerging technologies will have on a business’s success over the next decade.”
  • A review of nearly 2,000 companies found that organizations with digitally savvy leaders outperformed peers on revenue growth and valuation by over 48%.
  • CTOs and CIOs top the list of digital experts in the C-suite, with 47% of CTOs and 45% of CIOs having digital know-how. Just 23% of CEOs are considered digitally savvy.

Dive Insight:

Businesses will become irrelevant if decision-makers are unaware of how technology fits into their strategy — a lesson Blockbuster and Kodak learned the hard way. Competitors will soon outpace laggards through innovation and slimmer operating costs.

Without a digitally savvy top leadership team, businesses will struggle to use digital tools in a strategic way, said Stephanie Woerner, research scientist at the MIT Sloan Center for Information Systems Research and coauthor of the study.

“There’s a real pressure to become future ready,” said Woerner. “These firms need to become ambidextrous, innovating and taking costs out.”

Appointing leaders tied more closely to innovation to the CEO seat is a frequent move, especially in the tech realm. In February, Amazon tapped AWS CEO Andy Jassy as its future CEO, taking over from Jeff Bezos in the third quarter of 2021. Microsoft CEO Satya Nadella, appointed in 2014, nabbed the top leadership spot after serving as EVP of the company’s cloud and enterprise group.

Roberto Torres / CIO Dive, with data from MIT Sloan

Expect the trend of CIOs escalating to the CEO role to continue, said Woerner, as well as the addition of CIOs to the leadership board and partnerships between CIOs and other C-suite leaders to support digital strategies.

A partnership between a CIO and the CFO, for example, means the CIO “has a partner that knows what technology can do for the firm, and rather than acting as a gatekeeper on the finances is actually thinking about ‘How do we use our investment dollars wisely?’”

The pandemic underscored the criticality of having leaders with digital skills, a clear business mandate even before the crisis.

“What the pandemic did was, all of a sudden, smash right in front of you this idea that we’ve got to change the way that we do business,” said Woerner. “A lot of companies had made a start on this, but many of them were really quite surprised by it all, and found that they were having to make significant changes in months rather than years.”

But having a tech-informed perspective of the business isn’t solely a C-suite responsibility. Leaders “should also be thinking about people below and how they’re going to tap them, and what are the exciting things that they can do with those people,” Woerner said.

AI

What is Intelligence Automation?

The development of artificial intelligence technology was, of course, expected to be rapid. But in the last few years there has been an exponential increase in the number of platforms, applications, and tools based on machine learning and artificial intelligence technologies.

Scientists and developers continue to design and develop intelligent machines that can mimic reasoning, develop and learn knowledge, and attempt to mimic how humans think.

Of course, it is genuinely difficult to keep up with technologies that are developing this fast. To help, we have gathered the top 10 artificial intelligence technology trends for you.

10. Privacy and Policy

The General Data Protection Regulation (GDPR) came into force on 25 May 2018. The GDPR is the toughest privacy and security law in the world. Though it was drafted and passed by the European Union (EU), it imposes obligations onto organizations anywhere, so long as they target or collect data related to people in the EU. These and similar regulations are a must today, at a time when so many people entrust their personal data to cloud services and breaches occur on a daily basis. Considering that a firm stance on data privacy and security is more important than ever, such measures must evolve as fast as the technology itself. It is not hard to see how important privacy policies and security will become once robots and artificial intelligence are involved. In this context, the important thing is to learn the emerging regulations: mastering security and privacy technologies means mastering the policies that will dominate in the coming years.

9. The Convergence of IoT, Blockchain and AI

In order for artificial intelligence to become a more efficient technology in every field, it must adopt a structure integrated with other technologies. For example, self-driving cars may not make much sense without the Internet of Things (IoT). But when the two come together, the result is a genuinely transformative technology.

IoT activates and regulates the sensors in the vehicle by collecting real-time data, whereas artificial intelligence models are just what the vehicle needs to move on its own.

Likewise, blockchain* can work closely with AI to troubleshoot security and scalability issues, and it will have to in the near future, because securing the Big Data collected by IoT devices and artificial intelligence algorithms is unavoidable.


*What is Blockchain?

Blockchain is an ever-growing list of records, called blocks, that are linked and secured using cryptography. It is the technology that powers bitcoin.
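The definition above can be made concrete with a minimal sketch, using nothing beyond Python's standard library: each block's hash commits to its contents and to the previous block's hash, so tampering with any record breaks the chain.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash commits to its data and predecessor."""
    block = {"time": time.time(), "data": data, "prev_hash": prev_hash}
    payload = {k: block[k] for k in ("time", "data", "prev_hash")}
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Verify each block's own hash and its link to the previous block."""
    for i, block in enumerate(chain):
        payload = {k: block[k] for k in ("time", "data", "prev_hash")}
        if block["hash"] != hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(chain_is_valid(chain))             # True
chain[1]["data"] = "Alice pays Bob 500"  # tamper with a record
print(chain_is_valid(chain))             # False
```

Real blockchains layer consensus mechanisms (such as proof of work) on top of this linking; the sketch shows only the tamper-evidence property that makes the records trustworthy.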

8. Facial Recognition Technology

Facial recognition systems are a reliable method of biometric authentication. Facial recognition technology identifies or verifies a person’s identity using his or her face: it captures, analyzes, and compares patterns based on the person’s facial details and expressions. Thanks to large investments and R&D in this area, the accuracy of AI-based facial recognition systems has improved enormously in the last few years. According to an NIST (National Institute of Standards and Technology) report, massive gains in accuracy were made between 2013 and 2018, exceeding the improvements achieved in the 2010–2013 period.

Most facial recognition algorithms in 2018 had evolved to outpace the most accurate algorithm of late 2013. In 2018, only 0.2% of searches in a database of 26.6 million photos were incorrect, compared with 4% in 2014 – a 20x improvement over four years. It shows how important this technology will become in the future.
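Modern facial recognition typically works by mapping each face to an embedding vector with a deep network and then comparing embeddings. The sketch below assumes hypothetical, precomputed 4-dimensional embeddings (real embeddings typically have hundreds of dimensions and come from a trained network) and uses cosine similarity with an illustrative threshold:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    """Declare a match when the face embeddings are close enough.

    The threshold trades false accepts against false rejects -- the
    error rates that evaluations like NIST's measure.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Hypothetical embeddings: an enrolled face, a second photo of the same
# person, and a photo of a different person.
enrolled = [0.9, 0.1, 0.3, 0.2]
probe_same = [0.88, 0.12, 0.28, 0.22]
probe_other = [0.1, 0.9, 0.2, 0.7]
print(same_person(enrolled, probe_same))   # True
print(same_person(enrolled, probe_other))  # False
```

The accuracy gains reported above come almost entirely from better embedding networks; the comparison step itself stays this simple.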


If we also consider that technology giants like Google, Apple, Facebook, Amazon, and Microsoft are investing heavily in these areas, it is safe to say that in the near future, with the help of AI, the use and development of facial recognition technology will increase considerably.

7. Artificial Intelligence (AI) Assistants

Artificial intelligence assistants can simplify, automate, or even take over the jobs of employees in ordinary customer service and sales departments. We already have popular assistants like Siri, Cortana, Alexa, and Google Assistant. Many companies, especially technology companies, have started using AI assistants to perform basic tasks and are developing them for a variety of others.


It seems that in the coming years we will see artificial intelligence assistants in ever more aspects of our lives. ComScore predicts that by 2020 about half of all searches will be made by voice. That alone gives an idea of how much AI assistant technology will improve in the future.

6. Intelligent Automation in the Workplace

Imagine ordering dinner in a restaurant and having that order served by robots. It may seem like a dream right now, but in the near future it will become reality. Imagine how this technology could work in workplaces and production facilities!

Intelligent automation takes process automation to a completely different level. Instead of just executing predefined steps, automation that incorporates artificial intelligence becomes increasingly self-improving over time. Data gathered while performing tasks and workflows can automatically be used to improve those processes in the future.
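That feedback loop can be illustrated with a deliberately tiny, invented example (not any vendor's product): an automated invoice-approval step that tunes its own cutoff from the outcomes it observes.

```python
class SelfTuningApproval:
    """Toy 'intelligent automation' step: auto-approve small invoices,
    and tighten or relax the cutoff based on observed outcomes."""

    def __init__(self, cutoff=1000.0):
        self.cutoff = cutoff

    def decide(self, amount):
        return "auto-approve" if amount <= self.cutoff else "manual review"

    def record_outcome(self, amount, was_problem):
        # Feedback loop: a problem below the cutoff shrinks it;
        # a clean manual review above the cutoff expands it.
        if was_problem and amount <= self.cutoff:
            self.cutoff = min(self.cutoff, amount * 0.9)
        elif not was_problem and amount > self.cutoff:
            self.cutoff = max(self.cutoff, amount)

step = SelfTuningApproval()
print(step.decide(800))                     # auto-approve
step.record_outcome(800, was_problem=True)  # that approval went wrong
print(step.decide(800))                     # manual review
```

Production systems replace the single hand-tuned rule with learned models over many signals, but the principle is the same: execution data flows back into the decision logic instead of the workflow staying frozen at its predefined steps.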


Intelligent automation can also help identify and optimize best practices, enabling managers to make decisions that are more data-driven and therefore more accurate. Although this technology seems to be developing more slowly than others, we will continue to see it in certain areas of business in the coming years.

5. Artificial Intelligence in the Media and Entertainment Industry

Artificial intelligence (AI) does not seem to have penetrated the media and entertainment industry much, but the fact that the world’s first AI news anchor presented the news last year proves how far artificial intelligence has spread into different areas.


Gaming and movie production are high-cost businesses. Considering that only one in ten films turns a profit, the use of artificial intelligence in these sectors is all but inevitable. We mentioned Benjamin, an artificial intelligence that can write screenplays, in our Artificial Intelligence Art article. It shouldn’t be hard to see how many examples of artificial intelligence like this could appear in the media and entertainment industry in the coming period.

4. Artificial Intelligence in Cybersecurity Systems

Day by day, the complexity and scale of cyberattacks increase at a faster rate, and current defensive measures remain inadequate. Any morning we could wake up to the news that an online store we shop at has been attacked and everyone’s credit card information stolen.

It is, of course, a sensible approach to hand over the protection of such big and important data against cyber-attacks to a superhuman algorithm.


Therefore, AI-based cybersecurity systems will continue to play an important role in managing these attacks in the coming years. Using machine learning, organizations will be able to detect such security breaches more easily and ensure that information security teams take the necessary measures in advance.

3. Intelligent Applications

Intelligent applications are enterprise applications with embedded or integrated artificial intelligence technologies to support or replace human-based activities via intelligent automation, data-driven insights, and guided recommendations to improve productivity and decision making.

Today, enterprise application providers incorporate AI technologies into their offerings while also providing artificial intelligence platform capabilities, from enterprise resource planning (ERP) to customer relationship management, human resource management, and labor productivity applications. Managers should challenge their packaged software providers to outline in their product roadmaps how they are incorporating artificial intelligence technology to add business value in the form of advanced analytics, intelligent processes, and advanced user experiences. It is safe to say that any application built to solve a problem will eventually be equipped with artificial intelligence, and the companies that adopt this perspective will make a difference in the future, as such applications increase efficiency wherever they are used.

2. Artificial Intelligence Chips

Leading chip manufacturers such as Intel, Nvidia, AMD, and ARM are aiming to produce AI-optimized chips to speed up the operation of applications that run on AI, which rely on processors for the significant speed they need.


Because an artificial intelligence model needs considerable speed, processor performance is crucial to the efficient operation of these applications. It is quite difficult for CPU-only systems to keep up when the workload involves facial recognition or object identification, which require complex mathematical computations performed in parallel. In the coming period, performance improvements in artificial intelligence chips should allow NLP (natural language processing), speech recognition, computer vision, and similar applications to run at full capacity, and they are considered an excellent solution to this problem.

1. Automated Machine Learning (AutoML)

We can say that this technology, which is thought to revolutionize business intelligence, will be a much-discussed term in the coming months. The goal is to solve complex problems without manually training machines through the typical model-training process. Instead of getting stuck in that process, business intelligence specialists will be able to focus much more on their core problems.

What is AutoML?

Automated Machine Learning (AutoML) automates the end-to-end process of applying machine learning – creating, developing, and deploying predictive models – so that any enterprise can benefit from its data. It mainly focuses on two issues: data collection and prediction. All the steps in between can be automated while delivering a model that is well optimized and ready to make predictions.
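The core of AutoML, automated model search driven by validation performance, can be sketched in miniature with the standard library alone. The candidate models and data below are purely illustrative; real AutoML systems search far richer spaces.

```python
def fit_mean(xs, ys):
    """Baseline candidate: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Candidate: ordinary least-squares line, computed in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

def auto_select(train, valid, candidates):
    """Fit every candidate on the training set and keep the one with
    the lowest validation error -- AutoML's model search in miniature."""
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)
    xs, ys = [x for x, _ in train], [y for _, y in train]
    fitted = [(name, fit(xs, ys)) for name, fit in candidates]
    return min(fitted, key=lambda nf: mse(nf[1], valid))

# Illustrative data drawn from y = 2x + 1.
train = [(x, 2 * x + 1) for x in range(8)]
valid = [(x, 2 * x + 1) for x in range(8, 12)]
name, model = auto_select(train, valid,
                          [("mean", fit_mean), ("linear", fit_linear)])
print(name)       # linear
print(model(20))  # -> 41.0
```

Production AutoML also searches over feature preprocessing, model families, and hyperparameters, usually with smarter strategies than exhaustive comparison, but the selection principle is the same.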


One example is Microsoft’s recent addition of automated machine learning to Power BI, its business intelligence solution. In Power BI, AutoML lets data analysts use dataflows to build machine learning models through a simplified experience, using only their Power BI skills. Much of the data science underlying the creation of machine learning models is automated by Power BI.


AutoML includes safeguards to ensure that the generated model is of good quality and provides visibility into the process used to build it. It is safe to say that technology companies have given the first signs that they will invest more in the AutoML field in the coming period.


Data Is Great — But It’s Not a Replacement for Talking to Customers

The ability to gather and process intimate, granular detail on a mass scale promises to uncover unimaginable relationships within a market. But does “detail” actually equate to “insight”?

Many decision makers clearly believe it does. In Australia, for instance, the big four banks – Westpac, National, ANZ, and Commonwealth – are spending heavily on churning through mountains of customer data that relate one set of variables (gender, age, and occupation, for instance) to a range of banking products and services. Australia’s largest bank, the Commonwealth, has announced its own big data push.

Like the big banks, Australia’s two largest supermarket chains, Woolworths and Coles, are scouring customer data, applying the massive computing power now available (and needed) along with statistical techniques in the search for “insights.” This can involve combining web browsing activity with social media use, with purchasing patterns, and so on — complex analysis across diverse platforms.

While applying correlation and regression analysis (among other tools) to truckloads of data has its place, I have a real concern that — once again — CEOs and senior executives will retreat to their suites satisfied that the IT department will now do all the heavy lifting when it comes to listening to the customer.

Data’s Deceptive Appeal
To peek into the deceptive appeal of numbers, let’s review how one business hid behind its data for years.

Keith is the CEO of a wealth management business focused on high-net-worth individuals. It assists them with their investments by providing products, portfolio solutions, financial planning advice and real estate opportunities.

Like its competitors, Keith’s company used surveys to gather data on how the business was performing. But Keith and his executive team came to realize that dredging through these details was not producing insights that management could use in strategy development.

So Keith’s team decided on a different path, one that really did involve listening to the customer. They conducted a series of client interviews structured so that the customer did the talking and the company did the listening. What Keith and his executives discovered shocked them.

The first discovery was that their data was built on nonsense. The questions they had been asking reflected managers’ perceptions of what clients needed to answer, not what clients wanted to express. The result was data that didn’t capture clients’ real requirements: the list of priorities obtained from client interviews coincided with management’s assumed client priorities a mere 50 percent of the time.

Keith’s business is not alone in this as studies have shown that big data is often “precisely inaccurate.” A study reported by Deloitte found that “more than two-thirds of survey respondents stated that the third-party data about them was only 0 to 50 percent correct as a whole. One-third of respondents perceived the information to be 0 to 25 percent correct.”

In Keith’s case this error was compounded when it came to the rating of these requirements. For example, the company believed that older clients wouldn’t rank “technology” (digital and online tools) as high on their list of requirements. However, in the interviews they discovered that while these older clients weren’t big users of technology themselves, many cared about it a great deal. This was because they had assistants who did use it and because they considered having state-of-the-art technology a prerequisite for an up-to-date business.

What Keith and his team also discovered, to their surprise, was how few interviews it took to gain genuine insight. Keith reports that “we needed around 18 to 20 clients to uncover most of the substantive feedback. We thought we’d need many more.” What Keith encountered here is saturation: a research term referring to the point at which you can stop conducting interviews because you are no longer hearing anything new.
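Saturation can even be expressed as a simple stopping rule: code each interview into themes, and stop once several consecutive interviews surface nothing new. A sketch, with hypothetical coded themes:

```python
def reached_saturation(interview_themes, patience=3):
    """Return the 1-based index of the interview at which saturation is
    reached (no new themes for `patience` consecutive interviews),
    or None if the interviews never saturate."""
    seen, quiet = set(), 0
    for i, themes in enumerate(interview_themes, start=1):
        new = set(themes) - seen
        seen |= new
        quiet = 0 if new else quiet + 1
        if quiet >= patience:
            return i
    return None

# Hypothetical themes coded from each client interview.
interviews = [
    {"fees", "tech"}, {"fees", "advice"}, {"reporting"},
    {"advice", "tech"}, {"fees"}, {"tech"},  # three in a row add nothing
    {"advice"},
]
print(reached_saturation(interviews))  # -> 6
```

In practice, researchers judge saturation qualitatively rather than mechanically, but the rule captures why Keith's team needed only 18 to 20 interviews: once the theme set stops growing, more interviews add cost without adding insight.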

Listening to the Customer
Engaging with your customers may not be as exciting and new as investing in “big data,” but it has a solid track record of success. Cast your mind back to a historic moment in Toyota’s history.

When Toyota wanted to develop a luxury car for the United States, its team didn’t hunker down in Tokyo to come up with the perfect design. Nor did it sift through data obtained from existing Toyota customers about current Toyota models. Instead, it sent its designers and managers to California to observe and interview the target customer — an American, male, high-income executive — to find out what he wanted in a car. This knowledge, combined with its undoubted engineering excellence, resulted in a completely new direction for Toyota: a luxury export to the United States. You will know it better as the Lexus. Listening to the customer is now embedded in Toyota’s culture.

Listening to the customer is also a fundamental component of Adobe’s culture. The company speaks of a “culture of customer listening” and has produced a useful set of guidelines on how to tune in to customers. Elaine Chao, a Product Manager with the company, has expressed it this way: “Listening is the first step. We try to focus on what customers want to accomplish, not necessarily how they want to accomplish it.”

So, provided your data isn’t “precisely inaccurate,” employ modern computing power to examine patterns in your customers’ buying behavior. But understand big data’s limitations: the data is historic and static. Historic because it’s about the past; your customers have most likely moved on from what the data captures. Static because, as with any computer modeling, it can never answer a question you didn’t think to ask.

Real insights come from seeing the world through someone else’s eyes. You will only ever get that by truly engaging with customers and listening to their stories.