icm2:re (I Changed My Mind Reviewing Everything) is an ongoing web column by Brunella Longo

This column deals with some aspects of the change management processes experienced in almost any industry impacted by the digital revolution: how to select, create, gather, manage, interpret and share data and information, either because of internal and usually incremental scope - such as learning, educational and re-engineering processes - or because of external forces, like mergers and acquisitions, restructuring goals, new regulations or disruptive technologies.

The title - I Changed My Mind Reviewing Everything - is a tribute to authors and scientists from different disciplinary fields who have illuminated my understanding of intentional change and decision making processes during the last thirty years, explaining how we think - or how we think about the way we think. The logo is a bit of a divertissement, from the Latin divertere, meaning to turn in separate ways.



The vicious circles of human and artificial intelligence

One more thing about internal and external sources

How to cite this article?
Longo, Brunella (2020). The vicious circles of human and artificial intelligence. One more thing about internal and external sources. icm2re [I Changed my Mind Reviewing Everything ISSN 2059-688X (Online)], 9.8 (August). http://www.icm2re.com/2020-8.html

How to cite this article?
Longo, Brunella (2020). The vicious circles of human and artificial intelligence. One more thing about internal and external sources. icm2re [I Changed my Mind Reviewing Everything ISSN 2059-688X (Print)], 9.8 (August). http://www.icm2re.com/2020-8.html

London, 27 July 2020 - To confirm and expand some reflections about the most convenient mix of internal and external sources of information for the creation of business intelligence during the pandemic, it is interesting to note that before 1960 the subject generated far more literature, across a variety of disciplines, than in the following thirty years. The decrease was essentially due to a fairly stable and systematic organisation of practice. But after 1990, and up to the present day, the quantity of published essays, textbooks and journals on intelligence has gone up again, with a change in scope: policy making has become the main focus of attention for the ranks of senior directors advising on the matter. The field has gone through a process of so-called politicization (Eisenfeld 2017).

Lowenthal’s book, first published in 1999 and now in its 8th edition, has brilliantly documented the shift of the practice from the systematic management of processes in military and corporate settings to the much wider world of activities offered by lobbyists and contractors of governmental agencies (Lowenthal, 1999).

Has the ICT revolution allowed such an expansion of the intelligence domain to happen? Of course it has, in my view. Instrumental to that transformation, artificial intelligence is now expected to produce, together with a sort of democratisation of the field, more reliable ways to handle the automatic creation and diffusion of intelligence and the making of sense out of it, not without doubts about the preparedness of both academics and practitioners (Ågerfalk 2020).

How can we produce knowledge through AI without falling into the Alice in Wonderland hole?

The trap and opportunity of circularity of information

A few days ago I was impressed by the clarity and circularity of the communications made by British Airways in regard to the Covid-19 crisis, and in particular by their Chairman’s open letter Preparing for a different future.

With the word circularity I mean here the mechanisms for the creation and exploitation of public opinion that are fundamental in the use of influence: authoritative stances by experts, scientists and industry veterans, as well as trustworthy and catchy opinions by celebrities, politicians and other public personalities, determine the overall contents and sentiments of a flow of data.

The pressure of binding communications on people’s choices is immense and, at present, unregulated. The flow of data spreads through word of mouth on social media and blogs, filters into academic sources of information and ends up in media headlines, trade literature and insight reports for boards of directors. The process is at once precious for the efficacy and efficiency of government and corporate communications and very dangerous for the discovery, creation and production of new knowledge, raising doubts about who influences whom and with what reliable evidence.

The main reason for concern is that such alignment between internal and external sources does not create any intelligence at all. It only multiplies opportunities for inferences, best guesses and inductive reasoning that would be considered noise, were it not for the fact that they carry augmented influence and praise for the organisation or its products - and there is a practical need for any intelligence system to weigh the noise too, as a source of information in itself.

Mentions, connections, social media followers, likes and so on create realities that suddenly sound familiar, or shape the possible route for the acquisition of new data, or pull the serendipity curtain on that sort of eternal truth that just waits to be discovered!

Metadata are useful to engineer campaigns for commercial or social purposes as well as to scaffold sophisticated misinformation operations. But it is not always faulty indexing or tagging that triggers deception.

In fact, the web is full of technical bits that rely on our human trust in the genuine, non-malignant purposes of communications while using very weak tools: the API is perhaps the most important of these technologies as of today, but other mechanisms are constantly honed and developed without awareness of, or without care for, the multiple contexts in which they could be exploited at some point in time and space.

Deception is not easily seen, not even by those who are embedding logical controls. Software developers try new ways to validate, verify or confirm data within a certain information system, but it is always a catch-22 affair: fake videos, for instance, or fake documents, and sometimes even simple indexed homonyms, shared in one context become unintentional deception tricks in another.

Technical objects out of human control construct perfect zones of influence that invade multiple channels of communication at the same time.

Embedding controlled, sifted external information into internal databases or datasets may be the way forward for machine learning developments, a sort of “track and trace” methodology that gives hope of quality. But it is not to be excluded that even the opposite can work nicely - benchmarking open-source intelligence against super-vetted authoritative data.
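As a purely illustrative sketch, the following Python fragment shows one way such a “track and trace” admission rule could be expressed: external records carry their own provenance trail and are only merged into the internal dataset when they corroborate something already vetted. All names, fields and the corroboration test are hypothetical assumptions made for the example, not a description of any existing system.

```python
# Illustrative sketch only: a hypothetical "track and trace" admission rule
# for external information. Names, fields and the corroboration test are
# assumptions made for this example.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExternalRecord:
    claim: str                      # the piece of information itself
    source: str                     # where it came from (feed, blog, report)
    retrieved: date                 # when it was collected
    provenance: list = field(default_factory=list)  # trail of checks applied

def admit(record: ExternalRecord, vetted_claims: set) -> bool:
    """Admit an external record only if it corroborates a claim already
    present in the vetted internal set; log the check either way."""
    record.provenance.append(f"checked against internal set on {date.today()}")
    return record.claim in vetted_claims

# Benchmarking a small batch of open-source items against the internal set.
internal = {"route X suspended", "fleet reduced by 10%"}
candidates = [
    ExternalRecord("route X suspended", "trade blog", date(2020, 7, 20)),
    ExternalRecord("CEO to resign", "social media rumour", date(2020, 7, 21)),
]
admitted = [r for r in candidates if admit(r, internal)]
print([r.claim for r in admitted])  # only the corroborated claim is admitted
```

The same skeleton, run in the opposite direction, is also a crude form of the benchmarking mentioned above: the vetted internal set becomes the yardstick against which open-source items are scored rather than filtered.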

The invention of new assurance mechanisms for artificial intelligence should realistically confront the pressure to make things straightforwardly simple, aligned and non-contradictory, even when cognitive dissonance would be convenient and should show up. For instance, we know that there is a certain convergence out there in the cloud between various fields of practice, from advertising and persuasive technologies to communication strategies, towards binding communication: that sort of internal-information virtuous or vicious circle in which personal commitment, social cohesion and moral values determine a coalescence that makes behaviours, choices, acts and speeches highly predictable. But is this right? What are the risks for the construction of intelligence and, furthermore, for machine learning developments?

Binding communication (Joule, Girandola, 2007), the paradigm of which sits at the intersection of persuasive communication and external commitment (Kiesler, 1971), can be very successful in bringing about individual behavioural change, even in an oppressive and demanding professional context. Joule and Girandola found, for instance, that employees who were initially asked to freely give their opinions on the internal circulation of information became more interested in what happened and more involved in the company.

On a positive note, it seems to me that experts are starting to recognise in some discourses that there may be a future in actuarial methods for AI, and perhaps the pandemic will accelerate this apparently difficult trend, looking into long-term, sustainable competitive advantages and corporate investments.

In any case, people should be taught that shortcuts for intelligence do not actually exist in the digital any more than in the analogue world: clustering data on new phenomena, objects and situations, and bringing in so-called auxiliary, valuable information from external sources, is always hard work.
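By way of illustration only, the short Python sketch below (with synthetic data and scikit-learn, both my own choices for the example) shows why bringing in auxiliary external signals is work rather than a shortcut: the external features have to be aligned to the same records and rescaled before they can legitimately change a clustering at all.

```python
# Illustrative sketch with synthetic data: auxiliary external signals only
# influence a clustering after the alignment and rescaling work is done.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
internal = rng.normal(size=(100, 4))            # internal measurements
auxiliary = rng.normal(size=(100, 2)) * 50.0    # external signals on a different scale

# Clustering on internal data alone.
labels_internal = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(internal)

# The auxiliary columns must refer to the same 100 records and be rescaled,
# otherwise their larger magnitude would silently dominate the distances.
combined = np.hstack([internal, StandardScaler().fit_transform(auxiliary)])
labels_combined = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(combined)

# Agreement between the two partitions (1.0 would mean the external data changed nothing).
print(adjusted_rand_score(labels_internal, labels_combined))
```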

Like binding communication, many ways to exploit inference algorithms are legitimate and do not deserve demonisation: they work fine to determine quick and dirty, good-enough responses to innumerable, mostly simple or superficial questions. But they are also extremely porous, open to contaminations that we would not accept so easily if we looked into the way in which conclusions are drawn.

Perhaps we are going back to the starting point of deciding, first of all, what we need to know and where to draw the limits of symbolic representation. In any case it is important to remember that preparing for a different future could always be different.

References

Ågerfalk, Pär J. (2020) Artificial intelligence as digital agency. European Journal of Information Systems, Vol. 29, No. 1, pp. 1-8.

Eisenfeld, B. (2017) The Intelligence Dilemma: Proximity and Politicization. Analysis of External Influences. Journal of Strategic Security, Vol. 10, No. 2 (Summer 2017), pp. 77-96.

Joule, R.V., Girandola, F. (2007) How can people be induced to willingly change their behavior? The path from persuasive communication to binding communication. Social and Personality Psychology Compass, Vol. 1, No. 1 (November 2007), pp. 493-505.

Lowenthal, M. (1999) Intelligence: from secrets to policy. Washington, D.C.: CQ Press.