While modern AI, or machine intelligence (MI), hype revolves around big "eureka" moments and broad-scale "disruption", expecting these events to occur regularly is unrealistic. The reality is that working with AIs on simple, routine tasks will systemically drive better decision making as people become comfortable with the questions and strategies that are now possible. And hopefully, it will also build the foundation for more of those eureka(!) moments. Regardless, technological instruments allow workers to process information faster while increasing the precision of their understanding, enabling more complex and accurate strategies. In short, it's now possible to do in hours what once took weeks. Below are a few things I've found helpful to think about when driving machine intelligence at a large organization, as well as what is possible.
Algorithm aversion: humans are more willing to accept flawed human judgment, yet are very judgmental when a machine makes a mistake, even within the smallest margin of error. Decisions generated by simple algorithms are often more accurate than those made by experts, even when the experts have access to more information than the formulas use. For further elaboration on making better predictions, the book Superforecasting is a must-read.
Silos! The value of keeping your data and information secret as a competitive edge does not outrun the value of the innovation and insights possible if data is liberated within the broader organization. If this is possible, build what I call diplomatic back channels where teams or analysts can share data with each other.
Build a culture of capacity. Managers are willing to spend 42 percent more on an outside competitor's ideas. As Leigh Thompson, a professor of management and organizations at the Kellogg School, puts it: "We bring in outside people to tell us something that we already know," because it paradoxically means all the wannabe "winners" on the team can avoid losing face. It's not a bad thing to seek external help, but if this is how most of your novel work gets done and where you go for ideas, you have systemic problems. As a residual, the organization will fail to build strategic and technological muscle, which is likely to create a culture that elevates generalists, not novel technical thinkers, into leadership roles. In turn, you end up with an environment where technology is bolted onto legacy processes and thinking rather than the other way around (which is what staying relevant requires). Avoid the temptation to outsource everything just because nothing seems to be going anywhere right away. That 100-page PowerPoint deck from your consultant will only help in the most superficial ways if you don't have the infrastructure to drive the suggested outputs.
Our traditional institutions, leaders, and experts have shown themselves to be incapable of understanding and accounting for the multidimensionality and connectivity of systems and events. Consider the rise of far-right parties in Europe, the disillusionment with European Parliament elections as evidenced by voter turnout in 2009 and 2014 (despite more money being spent than ever), Brexit, and now the election of Donald Trump as president of the United States of America. In short, there is little reason to trust experts without multiple data streams to contextualize and back up their hypotheses.
How could experts get it wrong? Frankly, it's time to shift away from the conventional ways of making sense of events in the political, market, and business domains. The first variable is reimagining information from a cognitive-linguistic standpoint, probably the most neglected area in business and politics, at least within the mainstream. The basic idea? Words have meaning. Meaning generates beliefs. Beliefs create outcomes, which in turn can be quantified. The explosion of mass media, followed by identity-driven media, social media, and alternative media, is a problem, and we are at the mercy of media systems that frame our reality. If you doubt this, reference the charts below. Google Trends is deadly accurate in illustrating what is on people's minds, and whoever is on people's minds the most, bad or good, wins, at least when it comes to U.S. presidential elections. The saying that bad press is good press is quantified here, as is George Lakoff's thinking on framing and repetition (Google search trends can be used to easily see which frame is winning, by the way).
Google search trends of Democrat and Republican presidential candidates from 2004 to 2016. The candidate with the highest search volume won in every race.

Social media and news mentions of key 2016 presidential candidates per hour, using the query "Donald Trump" OR "Hillary Clinton" OR the hashtags #ElectionDay OR #Election2016.

Google Trends wasn't caught off guard by François Fillon's win over Nicolas Sarkozy and Alain Juppé in the French Republican primaries; it was closer than the polls suggested all along.
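For readers who want to poke at this themselves, here is a minimal sketch of pulling comparable search-interest data, assuming the unofficial pytrends library; the keywords and timeframe are illustrative choices on my part, not the exact queries behind the charts above.

```python
# Minimal sketch (assumption: the unofficial `pytrends` package, pip install pytrends).
# Keywords and timeframe are illustrative, not the exact queries used in the charts above.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
keywords = ["Donald Trump", "Hillary Clinton"]
pytrends.build_payload(keywords, timeframe="2015-06-01 2016-11-08", geo="US")

interest = pytrends.interest_over_time()  # weekly relative search volume, scaled 0-100

# Crude check of which "frame" is winning: who leads in total relative search volume?
totals = interest[keywords].sum()
print(totals)
print("Higher total search volume:", totals.idxmax())
```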
Within this system, there is little reason to challenge one's beliefs, and almost nothing forces anyone to question their own. Institutions and old media systems used to be able to bottleneck this; they were the only ones with a soapbox, and information moved slowly enough. Outthinking current information systems requires unorthodox data, such as open-source intelligence (OSINT), and a creativity that traditional systems of measurement and strategy lack. To a fault, businesses, markets, and people strive for simple, linear, and binary solutions or answers. Unfortunately, complex systems, i.e., the world we live in, don't dashboard into nice simple charts like the one below. The root causes of issues are ignored, untested, and left uncontextualized, which creates only a superficial understanding of what affects business initiatives.
A nice linear BI chart is above. Unfortunately, the world is much more complex and connected – like the network graph below, which clusters together new media that covered Hillary Clinton and Donald Trump.
I know it may feel like a reach to connect everything mentioned here, so more on OSINT, data, framing, information, outcomes, and markets to come.
Technology has increased access to information, making the world more similar on a macro and sub-macro level. However, despite the increased similarity, research shows business models are rarely horizontal, emphasizing the importance of micro-level strategic consideration. Companies routinely enter new markets relying on knowledge of how their industry works and the competencies that led to success in their home markets while not being cognizant of granular details that can make the difference between success and failure in a new market. Only through machine-driven intelligence can companies address the level of detail needed in a scalable and fast manner to remain competitive.
Research from Harvard Business Review shows that despite more access to information and networks through globalization, business models fail to transfer from one country to the next 86% of the time.

Relying on simple explanations for complex phenomena is risky: 3% of markets have a negative correlation to one another. Organizations therefore need to contextualize marketplace characteristics and avoid simplistic linear or binary KPIs.

The world is complex, but computational power is getting cheaper and more robust while machines are getting smarter. Even though only 11% of country-to-country profitability had a positive relationship, asset-rich large organizations with large networks and infrastructure are uniquely positioned to exploit this if they learn to understand their assets in higher resolution. In most cases, this means letting go of long-held beliefs about how things used to be done, in addition to being agnostic about how to make money.

The number of shares traded on the NYSE is shown as an analog to illustrate how quickly information grows exponentially year over year. Despite advancements in communication technology, companies are no better at market access.
Furthermore, machine intelligence and information abundance have rapidly diminished the value of expertise and eroded the value of information itself. The expertise needed to outrun or beat machine intelligence increases exponentially every year. Over the next one to two years, the most successful companies will accept that the burden of proof has shifted from technologies and AI to human expertise. Machines will also come to reframe what business and strategy mean. Business expertise in the future will be the ability to synthesize and explore data sets and create options using augmented intelligence, not being an expert on a subject per se. The game changers will have the fastest "information to action" at scale.
A residual of that characteristic is that a "good" or "ok" decision's value is exponentially highest at the beginning, and oftentimes much more valuable than a perfect decision arrived at later. To address this trend, organizations must focus on developing processes and internal communication that shrink "information-to-action" transaction times, similar to how traders look at financial markets. Those margins of competitive edge will continue to shrink but will become exponentially more valuable.
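To make that intuition concrete, here is a toy sketch; the decay rate and numbers are made-up assumptions purely for illustration, not measurements of any real market.

```python
import math

# Toy model (illustrative assumptions only): the payoff of acting on information
# decays roughly exponentially with delay as competitors act on the same signal.
def realized_value(quality: float, delay_days: float, half_life_days: float = 7.0) -> float:
    """quality = fraction of a 'perfect' decision's value captured (0 to 1)."""
    decay = math.exp(-math.log(2) * delay_days / half_life_days)
    return quality * decay

fast_good = realized_value(quality=0.8, delay_days=1)      # good-enough decision, acted on quickly
slow_perfect = realized_value(quality=1.0, delay_days=21)  # perfect decision, three weeks later

print(f"fast good-enough decision: {fast_good:.2f}")    # ~0.72
print(f"slow perfect decision:     {slow_perfect:.2f}") # ~0.13
```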
Why businesses harnessing AI and other technologies are leading the way
Studies show experts consistently fail at forecasting and traditionally perform worse than random guessing in domains as diverse as medicine, real estate valuation, and political elections. This is because people traditionally weigh experience and information in very biased ways. In the knowledge economy, this is detrimental to strategy and business decisions.
Working with machines enables businesses to learn and quantify connections and influence in ways humans cannot. An issue is rarely isolated to the confines of a specific domain; part of Walmart's analytics strategy, for example, is to focus on key variables in the context of the other variables they are connected to. This can be done in extremely high resolution by taking a machine-based approach to mining disparate data sets, ultimately allowing flexibility and higher-resolution KPIs for business decisions.
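As a rough sketch of what "key variables in the context of connected variables" can look like in practice: the file names and columns below are hypothetical placeholders of my own, not Walmart's actual data or pipeline.

```python
import pandas as pd

# Illustrative only: dataset names, columns, and join keys are hypothetical placeholders.
# The idea is to read a core KPI (weekly store sales) next to "connected" external
# variables (local weather, local search interest) rather than in isolation.
sales = pd.read_csv("store_weekly_sales.csv")      # store_id, week, sales
weather = pd.read_csv("local_weather.csv")         # store_id, week, avg_temp_f, precip_in
trends = pd.read_csv("local_search_interest.csv")  # store_id, week, category_search_index

df = (
    sales.merge(weather, on=["store_id", "week"])
         .merge(trends, on=["store_id", "week"])
)

# How do the connected variables move with the KPI?
print(df[["sales", "avg_temp_f", "precip_in", "category_search_index"]].corr()["sales"])
```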
The effects of digital disintermediation and the sharing economy on productivity growth
Machines have increased humans' ability to synthesize multiple information streams simultaneously, as well as our ability to communicate those insights, which should lead to higher utility on assets. In the future, businesses will likely have to focus more on opportunity cost and re-imagine asset allocation as lower barriers to entry increase competition. Inherently, intelligence and insights are about decisions, and as noted above, the value of a good or ok decision is highest at the beginning; faster "information-to-action" transaction times, much like how traders look at financial markets, remain the goal.
Is this the beginning of the end?
Frameworks driven by machines will allow humans to focus on more meaningful and creative strategies that cut through the noise to find what variables can be controlled, mitigating superficial processes and problems. As a result, it is the end for people and companies that rely on information and routine for work. And the beginning for those who can solve abstract problems with creative and unorthodox thinking within tight margins. Those who do so will also be able to scale those skills globally with advancements in communication technology and the sharing economy, which will considerably speed up liquidity on hard and knowledge-based assets.
Recently I've been thinking about ways to detect bias as well as what makes people share. Yes, understanding dynamics and trends over time, like the chart below (Topsy is a simple, free tool for getting basic Twitter trends), can be helpful, especially for linear forecasting. Nonetheless, these approaches reach their limits when we want to look for deeper meaning, say at the cognitive or "information flow" level.
Enter networks. They enable an understanding of how things connect and exist at a level that standard KPIs like volume, publish count, and sentiment over time cannot reach. By mapping out a network based on entities, extracted locations, and similar text and language characteristics, it's possible to map how a headline, entity, or article exists and connects to other entities within a specific domain. In turn, this creates an analog of the physical world with stunning accuracy, since more information is reported online every day. For example, using online news articles and Bit.ly link data, I found that articles with less centrality to their domain (based on linguistic similarity across the aggregated on-topic articles), which indicates variables being left out of the article, typically got shared the most on social channels. In short, articles that were narrower in focus, and therefore less representative of the broader domain, tended to be shared the most… This is just the tip of the iceberg.
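For the curious, here is a rough sketch of those mechanics, a simplification of my own: the similarity threshold, placeholder texts, and share counts are assumptions, and the original approach also used entities and extracted locations rather than raw text alone.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical inputs: on-topic article texts and their social share counts
# (e.g., from Bit.ly click data). These values are placeholders.
articles = ["text of article one ...", "text of article two ...", "text of article three ..."]
shares = [120, 5400, 310]

# Build a similarity graph: nodes are articles, edges connect linguistically similar ones.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(articles)
sim = cosine_similarity(tfidf)

G = nx.Graph()
G.add_nodes_from(range(len(articles)))
threshold = 0.2  # assumed cutoff for "similar enough"
for i in range(len(articles)):
    for j in range(i + 1, len(articles)):
        if sim[i, j] >= threshold:
            G.add_edge(i, j, weight=float(sim[i, j]))

# Compare each article's centrality within the topic network to how much it was shared.
centrality = nx.degree_centrality(G)
for i in G.nodes:
    print(f"article {i}: centrality={centrality[i]:.2f}, shares={shares[i]}")
# The pattern described above: lower-centrality (narrower) articles tended to be shared more.
```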