Sunday, September 10, 2017

Blockchain and the Edge of Disruption - Brexit

In August 2017, the British government released a position paper on future customs arrangements with the EU following Brexit. Among other things, the paper suggested that new technology would address some of the challenges of maintaining trade "as frictionless as possible". In his report, the BBC technology correspondent mentioned number plate recognition, artificial intelligence, and of course blockchain. This week I met with a couple of our blockchain experts at Reply to brief me on what blockchain can and can't do to address this challenge.

First, let's understand the nature of the challenge. When goods cross a customs border, they have to be declared. Some shipments are inspected to check that these declarations are accurate. This process has three objectives.
  • To ensure that the goods don't exceed some import/export quota, and to levy customs duties if necessary
  • To ensure that the goods satisfy applicable standards and regulations - for example food safety
  • To identify contraband or counterfeit goods
Importers should be able to submit customs declarations electronically before the shipment reaches the border. This should enable customs officials (or algorithms working on their behalf) to select shipments for casual or close inspection, thus reducing delays at the border.

Note that these processes already exist for goods entering the European single market. But the potential impact of Brexit is a massive increase in the volume of cross-border shipments, and current systems and procedures are not expected to be able to handle these volumes. Goods will be delayed, with implications not only for cost but also the quality of fresh produce. Just-in-time supply chains will be disrupted. 

The primary contribution of blockchain here is to establish a robust and watertight data trail for goods. This means that if goods are properly labelled, the blockchain can deliver a complete history. This doesn't remove the need for customs declarations, but under certain conditions it could reduce the need for inspections at the border. For example, instead of being stationed at the border, a plain-clothes customs inspector might visit retail outlets with a hand-held label reader, verifying the blockchain record associated with each label, with the power to seize goods and instigate prosecutions.

The blockchain can tell you about the provenance of the item identified on the label, but what's to stop someone switching labels, reusing old labels or even cloning labels?

For some goods the stakes are very high. The lost revenue from the smuggling and counterfeiting of cigarettes alone is estimated at €10bn a year. So a Europe-wide system is being implemented to track cigarettes: by May 2019 all tobacco products within the EU are required to be "marked with a unique identifier" and carry a security stamp. So that's just one high-stakes product, with a relatively small number of manufacturers.

For diamonds, the stakes are even higher. The Kimberley Process Certification Scheme (KPCS) was introduced in 2003 to control trade in "conflict diamonds", but there are many flaws in the scheme.
"If a consumer went into almost any jeweller in the UK and asked for the origin of a diamond on display, staff would most be most unlikely to be able confirm which country, let alone the mine, it was sourced from." [Guardian, March 2014]

My colleagues briefed me on some interesting innovations they are working on for specific high-value products. One possibility is to inscribe a unique identifier into the product itself. For example, diamonds can be etched with a laser, expensive shoes can have the identifier embedded in the heel. And with 3D printing, it may be possible to manufacture each item with its own unique identifier.

Another possibility is to create a detailed description of each item. Everledger, which describes itself as a permanent ledger for high-value assets, uses more than 40 features, including colour and clarity, to create a diamond's ID. It is now moving on to other high-value products such as fine wine. In future, such schemes should make it more difficult to pull off the kind of criminal sleight of hand for which Rudy Kurniawan got ten years in prison.
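To illustrate the general idea (this is my own toy sketch, not Everledger's actual method, and the feature names are invented for the example), a digital fingerprint can be derived by hashing a canonical description of the item's measured attributes:

```python
import hashlib
import json

def item_fingerprint(features: dict) -> str:
    """Derive a deterministic identifier from an item's measured attributes.
    Serialising with sorted keys makes the hash independent of field order."""
    canonical = json.dumps(features, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# A hypothetical subset of the 40+ features used to describe a diamond
diamond = {
    "carat": 1.02,
    "colour": "F",
    "clarity": "VS1",
    "cut": "round brilliant",
    "depth_pct": 61.8,
    "table_pct": 57.0,
}

print(item_fingerprint(diamond))  # the ID that would be written to the ledger
```

The point of such schemes is that the identifier is derived from the thing itself, so it is harder to transfer to a different item than a printed label would be.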

To prevent cloning, you need more than blockchain. Just as numberplate recognition fails if people can use false numberplates, so blockchain labelling fails if people such as Kurniawan can easily reuse or copy the labels. At a wine auction in 2006, he offered eight magnums of 1947 Château Lafleur. This immediately aroused suspicion, because only five magnums were ever produced. If he had sold each bottle separately, would anyone have noticed? Yes, perhaps, if every sale had to be recorded in the blockchain.

If criminals have access to such technologies as etching and 3D printing, they may be able to create exact copies of labels and products that would appear valid when checked against the blockchain. So to guard against this, the blockchain has to have sufficient visibility of the supply chain to detect any duplicates.

In other words, to use blockchain properly, it's not enough to maintain a record of the origin of an item. You have to have a complete record of all transactions involving the item, including inspections. This means adding to the blockchain at every link in the supply chain. As the industry body BIFA observes in relation to blockchain generally,
"this technology ... can only reap its full benefits if all stakeholders/members of the supply chain make use of the technology and can access it"

Further difficulties arise where goods are processed, for example when a large animal or fish is cut into pieces to be sold to multiple consumers. Blockchain can be used to check that the total weight of the pieces is consistent with the original weight of the whole, but again this assumes that all the pieces are tracked. However, there is considerable interest in getting this kind of scheme to work effectively for products where sustainability is a major issue, such as tuna.
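Here is a minimal sketch of that weight reconciliation check. The 5% allowance for processing loss is an invented figure for illustration, not an industry rule.

```python
def weight_reconciles(whole_weight_kg, piece_weights_kg, tolerance=0.05):
    """Check that the tracked pieces account for the whole, within a processing-loss
    tolerance. The 5% default is illustrative, not an industry standard."""
    total = sum(piece_weights_kg)
    return total <= whole_weight_kg and (whole_weight_kg - total) / whole_weight_kg <= tolerance

# A 180 kg tuna cut into tracked loins and steaks
print(weight_reconciles(180.0, [44.5, 43.0, 42.0, 41.5]))        # True: 171 kg accounted for
print(weight_reconciles(180.0, [44.5, 43.0, 42.0, 41.5, 30.0]))  # False: pieces exceed the whole
```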

Where there are transformation points in the supply chain - such as cutting a rough diamond into jewels or cutting a whole tuna into steaks - these can be subject to special monitoring and certification, and this can itself be written into the blockchain for further reassurance. 


In summary, my colleagues have convinced me that there are significant opportunities for blockchain in supporting the supply chain for selected high-value or safety-critical products, provided certain assumptions are met. Blockchain is not necessarily the whole solution, but works when appropriately combined with other innovations.

But even these schemes will take years to get up to speed. We started with the problem of massive increases in the volume of shipments crossing customs borders. In the examples I've discussed here, customs facilitation is not the primary motive for introducing blockchain, but may be an additional benefit. However, it is hard to see a sufficient number of these schemes being operational in time for Brexit, let alone a universal system for all categories of goods.
 



Stephen Adams, Brexit customs questions (Global Counsel, 16 August 2017)

Ian Allison, Blockchain plus 3D printing equals 'smart manufacturing' and Ethereum you can touch (International Business Times, 11 October 2016)

Aleya Begum, UK outlines "fantasy" post-Brexit customs position (GTR Review, 16 August 2017)

British International Freight Association, Blockchain Technology in Logistics (BIFA, February 2017)

Rory Cellan-Jones, Can tech solve the Brexit border puzzle? (BBC News, 16 August 2017)

Matthew Lesh, Blockchain offers an innovative solution to the Brexit customs puzzle (Brexit Central, 17 August 2017)

Natasha Lomas, Everledger Is Using Blockchain To Combat Fraud, Starting With Diamonds (TechCrunch, 29 June 2015)

Paul McClean, EU report backs joint effort to trace illicit cigarettes (FT, 22 December 2016)

Adele Peters, Tracking Tuna On The Blockchain To Prevent Slavery And Overfishing (Fast Company, 8 September 2016)

Jeff John Roberts, Big Pharma Turns to Blockchain to Track Meds (Fortune, 21 September 2017)

Gian Volpicelli, How the blockchain is helping stop the spread of conflict diamonds (Wired, 15 February 2017)

Wikipedia: Kimberley Process Certification Scheme, Rudy Kurniawan


Related Posts: Blockchain and the Edge of Disruption - Fake News (September 2017)


Updated 22 September 2017

Friday, September 01, 2017

Blockchain and the Edge of Disruption - Fake News

In a world where stability and trust are under threat, blockchain may seem to be a good way of holding the line. In the past month, @omribarzi has written several Forbes articles describing various applications of blockchain technology.

In this post I want to look at his proposal for addressing the problem of fake news, in which he makes the following claims:
  • Blockchain Tech Seeks to Decentralize News
  • Blockchain Tech Can Fix Mainstream Media
  • Blockchain Can Also Fix Social Media
  • Giving Control Back to the Users
So let's start with the problem statement.
"The biggest issue with news sources in the digital age is verifiability. ... During the last American election, accusations of bias were everywhere, and the public has grown sick of the lack of clear and unbiased journalism."
 One of the things that blockchain can do is provide a clear lineage for a given item. If someone presents you with a dodgy story and claims that it comes from a reputable source such as the BBC, you can (if you choose) inspect the blockchain to verify this claim.
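As a rough illustration of what that verification step might involve (a sketch of the general idea, not any particular platform): the publisher registers a hash of each article as published, and the reader's client recomputes the hash of the item it has received and looks it up.

```python
import hashlib

# A toy "register": publisher -> hashes of articles they have actually published.
# In a real system this would be an on-chain registry signed with the publisher's key;
# here it is just a dict, to illustrate the verification step.
published = {
    "bbc.co.uk": {
        hashlib.sha256(b"Article text as originally published ...").hexdigest(),
    }
}

def claims_check_out(claimed_source: str, article_text: str) -> bool:
    digest = hashlib.sha256(article_text.encode("utf-8")).hexdigest()
    return digest in published.get(claimed_source, set())

print(claims_check_out("bbc.co.uk", "Article text as originally published ..."))  # True
print(claims_check_out("bbc.co.uk", "A doctored version of the article ..."))     # False
```

Note that any alteration produces a different hash, so this only proves that a specific text was published by a specific source; it says nothing about whether a summary or excerpt of it is fair.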

Or you could just look on the BBC website. It is not clear that blockchain is any easier or more reliable than other methods of fact-checking.

One use case described in the article is that "writers can offer snippets -- concise summaries of news articles". Blockchain may be able to verify that an original news article has a reputable source, but how can it verify that the summary accurately represents the original article or articles? Fake news can sometimes contain true snippets taken out of context, juxtaposed with other material to create a deliberately false impression.

A lot of recent fake news has been exposed by a simple fact check. For example, the false assertion that President Obama played golf during Hurricane Katrina is refuted by a simple date check (Obama was not president during Katrina) or by looking at contemporary news reports. Is there a way that blockchain could establish a link from the snippet to the fact-check?

And the quality of news is not just dependent on identifying the source. The BBC is a reliable source of news for many topics, but in some areas (e.g. climate change, Brexit economics) a dogmatic notion of "balance" results in its giving the same respect to dubious minority opinions as to expert consensus.

Verifiability is ultimately a question of methodology. Where a news story is controversial or politically charged, a good journalist or editor strongly prefers multiple independent sources, and will actively check the most obvious ways in which the story might be refuted (such as reverse image search). How is blockchain going to help here?

Most of the time, fact-checking is relatively easy if you can be bothered. The reason fake news flourishes is that people can't be bothered. Often they can't even be bothered to read the article or view the video before reposting something, so the "like" is based purely on a seductive headline.

The article describes a platform called Snip, which will establish a reputation economy, and somehow remain immune to armies of bots. Snip means you never have to read long-form journalism (if you don't want to), and it has a machine learning algorithm "that learns you and your preferences so the end result is a highly relevant personalized genuine news feed". Sounds pretty much like Facebook. Is that really "giving control back to the users"?



Omri Barzilay, Why Blockchain Is The Future Of The Sharing Economy (Forbes, 14 August 2017)
Omri Barzilay, 3 Ways Blockchain Is Revolutionizing Cybersecurity (Forbes, 21 August 2017)
Omri Barzilay, How Blockchain Is Reinventing Your News Feed (Forbes, 28 August 2017)
Omri Barzilay, Will Blockchain Change The Way We Invest? (Forbes, 30 August 2017)

Alexandra Svokos, Barack Obama Actually Visited Hurricane Katrina Victims, So Haters Get Out (Elite Daily, 31 August 2017)

Related Post: Blockchain and the Edge of Disruption - Brexit (September 2017) 

Wednesday, March 01, 2017

The PowerPoint Collection

A collection of blogposts about PowerPoint.


Corrupting Evidence (Feb 2005) 
Edward Tufte is writing a book called Beautiful Evidence, about the proper and improper use of modern rhetorical media, such as PowerPoint.

What Exactly is PowerPoint For? (March 2006)
Microsoft is making concerted efforts to improve their own use of PowerPoint, and to encourage others to use it better. Bill Gates spoke without slides in his keynote speech at Mix06.

Beyond Bullet Points (May 2006)
Some of my friends at Microsoft are excited about Cliff Atkinson and his "new" presentation style, based on the work of psychology professor Richard E. Mayer.

Who's the Dick in the Wine Bar (May 2006)
If you are accustomed to traditional PowerPoint, beware. You may find these videos disturbing.

PowerPoint Slides (Nov 2006)
It is not Microsoft's fault if the Pentagon makes inappropriate use of the available tools. Loads of stupid documents have been written in Word, and loads of bad accounts produced in Excel. But it is PowerPoint that gets most of the criticism.

Blame PowerPoint (Oct 2009)
If different groups or communities use PowerPoint differently, there may be many different PowerPoints-in-use corresponding to a single PowerPoint-as-built.

Visualizing Complexity (April 2010)
Lots of people have been mocking a diagram that attempts to visualize the complexity of the situation in Afghanistan using system dynamics, rendered as a PowerPoint slide. (Many people have chosen to blame PowerPoint for the complexity of this diagram.) See also Understanding Complexity (July 2010)

Visual Cliché in Architectural Discourse (Nov 2010)
The visual language of architectural discourse, from enterprise to software, is surprisingly weak. Many diagrams look as if they may have started as meaningful sentences, but they have been transformed into diagrams by discarding most of the words and putting the remaining words into coloured shapes, arranged artistically on the slide.

Wednesday, April 27, 2016

The Power of Twitter

Let's suppose I want to find an intelligent review of a film.

If I just put the name of the film into Google, I will get endless repetitions of the synopsis, together with details of cinemas showing the film, or places to buy/download.

In my post You don't have to be smart to search here ... but it helps (Nov 2008), I outlined one possible trick. If you put the name of the film together with a random cultural icon (my example was Lacan), you will get reviews of the film that name-drop the icon. That immediately filters out all the standard cinema listings. However, you might need to try a number of different cultural icons until you strike lucky.

A second option is to subscribe to good magazines. When I watched the film Anomalisa, I didn't immediately make the connection with Schopenhauer. The connection was made for me by a fascinating review by Zadie Smith in the New York Review of Books.



Once you know that such a connection exists, you can use Google to find it. But Google won't make that connection for you - unless sufficient numbers of other people have already made that connection.

So here's a third option. Twitter allows you to have a list of intelligent film critics, and intelligent magazines containing intelligent film reviews. Either you decide for yourself what counts as intelligent, or you adopt someone else's list. Then you can search through the list for seriously intelligent reviews of the latest film. You can't do anything quite like this with Google.
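For the technically minded, here is a rough sketch of how that list-based search might be automated, assuming the (legacy) Twitter v1.1 API via the tweepy library; the list owner, slug and credentials are placeholders, not a real list.

```python
import tweepy  # assumes tweepy is installed and you have Twitter API credentials

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

def reviews_from_list(owner, slug, film_title, count=200):
    """Scan a curated Twitter list (e.g. a hand-picked list of film critics)
    for tweets mentioning a film. The owner and slug below are placeholders."""
    tweets = api.list_timeline(owner_screen_name=owner, slug=slug, count=count)
    return [(t.user.screen_name, t.text)
            for t in tweets if film_title.lower() in t.text.lower()]

for critic, text in reviews_from_list("my_handle", "film-critics", "Anomalisa"):
    print(critic, ":", text)
```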


When you search for something, Google can give you page after page of practically identical material - for example, hundreds of newspapers all repeating the same press release. What one really wants is a search engine that works out which page represents the original source, which pages represent replications with no added content or value, and which pages offer additional commentary and interpretation. It is possible that Twitter, with its conversational structure, may be closer to providing this kind of navigation. But only if the platform can achieve reasonable commercial viability without being polluted.
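For what it's worth, detecting straight replications is a well-understood problem. Here is a minimal sketch of one standard technique, shingling plus Jaccard similarity, which flags pages that merely repeat a press release; the example texts are invented, and this is not a claim about how Google or Twitter actually rank pages.

```python
import re

def shingles(text, k=5):
    """Split text into overlapping k-word shingles."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def similarity(a, b):
    """Jaccard similarity between two texts' shingle sets: close to 1.0 means near-identical."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

press_release = "Acme today announced record quarterly results driven by strong demand ..."
reprint       = "Acme today announced record quarterly results driven by strong demand ..."
commentary    = "Acme's numbers look strong, but the announcement glosses over falling margins ..."

print(similarity(press_release, reprint))     # close to 1.0: a straight replication
print(similarity(press_release, commentary))  # much lower: added interpretation
```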

The Force of Goole

When people talk about Internet Binging, they aren't talking about using the world's fourth most popular internet search engine. According to @ruskin147's BBC Radio Four documentary The Force of Google this evening, people don't even use the generic phrase "searching the internet". They use the word "Google". I think I heard someone say that the word is now more popular than the word "eggs".

Rory discussed several ways that hard-boiled Google poaches Internet business, while scrambling our brains.

1. Business is dependent on the caprice of Google ranking. Rory talks to the owner of a fly fishing company, who gets a significant proportion of his business via Google. When Google changed its algorithm in 2013, his webpage dropped from page one to page seven - almost equivalent to a commercial death penalty. Then inexplicably it climbed back again - the death penalty reprieved. Readers with long memories will remember the story of BMW (Feb 2006), which was banished from Google for three days in 2006.
 
2. In trying to be as helpful as possible to searchers, Google sometimes fails to respect the interests of other information providers. For example, if you search for hotels in Bury, you get Google's automatically curated list before you get lists from rival platforms such as TripAdvisor and Yelp. 

3. In the past, there has been some evidence that Google is biased towards controversial new technologies, perhaps because the technology vendors spend more on advertising than the technology sceptics. I have noted this apparent bias in relation to Biometrics (Nov 2003) and RFID (Nov 2005). Google now seems to have made some progress on this issue - Rory looked up "fracking" and got a more even-handed view from Google than from Bing.

4. Even without any obvious commercial or political agenda on Google's part, it is easy to see how Google's results could appear to show a lack of balance. Note for example the recent controversy about Unprofessional Hair. There have also been suggestions that Google page ranking could influence the public perception of politicians and thus sway elections.

5. One of the most dangerous aspects of the Google phenomenon is the widespread illusion that Google gives you Objective Truth. Rory talks to Ben Gomes, described as Google's Guru of Search, about the Quest for the Perfect Search.

"The perfect search is giving you what you were looking for. Not just the words you typed - but what you were actually looking for."

The programme gave the impression that Google is converging on the Perfect Search. Rory himself says he generally finds what he is looking for. My own experience is that it sometimes requires a fair amount of ingenuity to find stuff, especially interesting and original stuff. See my posts You don't have to be smart to search here ... but it helps (Nov 2008) and Thinking with the Majority (March 2009). See also The Power of Twitter (April 2016).



Wondering about the deliberate spelling mistake in the title of this post? I wanted to pay tribute to a listing from @brightonargus.
Which reminded me of the original Argus Panoptes, the giant who would be the mythical ancestor of Google. And also the ARGUS-IS system, a secret rival to Google's Street View.  Even Argus may have flawed vision sometimes.

Wikipedia: Argus Panoptes, ARGUS-IS.

Leigh Alexander, Do Google's 'unprofessional hair' results show it is racist? (Guardian 8 April 2016)

Rory Cellan-Jones, Six searches that show the power of Google (BBC, 26 April 2016)

Konrad Krawczyk, Google is easily the most popular search engine, but have you heard who’s in second? (Digital Trends, 3 July 2014)

Sunday, April 03, 2016

From Networked BI to Collaborative BI

Back in September 2005, I commented on some material by MicroStrategy identifying Five Types of Business Intelligence. I arranged these five types into a 2x2 matrix, and commented on the fact that the top right quadrant was then empty. 



 
The Cloud BI and analytics vendor Birst has now produced a similar matrix to explain what it is calling Networked BI, placing it in the top right quadrant. Gartner has been talking about Mode 1 (conventional) and Mode 2 (self-service) approaches to BI, so Birst is calling this Mode 3.




While there are some important technological advances and enablers in the Mode 3 quadrant, I also see it as a move towards Collaborative BI, which is about the collective ability of the organization to design experiments, to generate analytical insight, to interpret results, and to mobilize action and improvement. This means not only sharing the data, but also sharing the insight and the actioning of the insight. Thus we are not only driving data and analytics to the edge of the organization, but also developing the collective intelligence of the organization to use data and analytics in an agile yet joined-up way.

I first mentioned Collaborative BI on my blog during 2005, and discussed it further in my article for the CBDI Journal in October 2005. The concept started to gather momentum a few years later, thanks to Gartner, which predicted the development of collaborative decision-making in 2009, as well as some interesting work by Wayne Eckerson. Also around this time, there were some promising developments by a few BI vendors, including arcplan and TIBCO. But internet searches for the concept are dominated by material between 2009 and 2012, and things seem to have gone quiet recently.


Previous posts in this series

Service-Oriented Business Intelligence (September 2005)
From Business Intelligence to Organizational Intelligence (May 2009)
TIBCO Platform for Organizational Intelligence (March 2011)


Other sources

Gartner Reveals Five Business Intelligence Predictions for 2009 and Beyond (Gartner, January 2009). Dave Linthicum, Let's See How Gartner is Doing (ebizQ, May 2009)

Chris Middleton, Business Intelligence: Collaborative Decision-Making (Computer Weekly, July 2009)

Ian Bertram, Collaborative Decision-Making Platforms (Gartner 2011)

Wayne Eckerson, Collaborative Business Intelligence: Optimizing the Process of Making Decisions (April 2012)

Monique Morgan, Collaborative BI: Today and Tomorrow (arcplan, April 2012)
Tiemo Winterkamp, Top 5 Collaborative BI Solution Criteria (arcplan, April 2012)

Cliff Saran, Prepare for two modes of business intelligence, says Gartner (Computer Weekly, March 2015)

The Future of BI is Networked (Birst, March 2016)


Updated 21 April 2016 (image corrected)

Thursday, March 24, 2016

Artificial Belligerence

Back in the last century, when I was a postgraduate student in the Department of Computing and Control at Imperial College, some members of the department were involved in building an interactive exhibit for the Science Museum next door.

As I recall, the exhibit was designed to accept free text from members of the public, and would produce semi-intelligent responses, partly based on the users' input.

Anticipating that young visitors might wish to trick the software into repeating rude words, an obscenity filter was programmed into the software. When some of my fellow students managed to hack into the obscenity file, they were taken aback by the sheer quantity and obscurity of the vocabulary that the academic staff (including some innocent-looking female lecturers) were able to blacklist.

The chatbot recently launched onto Twitter and other social media platforms by Microsoft appears to be a more sophisticated version of that exhibit at the Science Museum so many years ago. But without the precautions.

Within 24 hours, following a series of highly offensive tweets, the chatbot (known as Tay) was taken down. Many of the offensive tweets have been deleted.


Before

Matt Burgess, Microsoft's new chatbot wants to hang out with millennials on Twitter (Wired, 23 March 2016)

Hugh Langley, We played 'Would You Rather' with Tay, Microsoft's AI chat bot (TechRadar, 23 March 2016)

Nick Summers, Microsoft's Tay is an AI chat bot with 'zero chill' (Engadget, 23 March 2016)


Just After

Peter Bright, Microsoft terminates its Tay AI chatbot after she turns into a Nazi (Ars Technica)

Andrew Griffin, Tay Tweets: Microsoft AI chatbot designed to learn from Twitter ends up endorsing Trump and praising Hitler (Independent, 24 March 2016)

Alex Hern, Microsoft scrambles to limit PR damage over abusive AI bot Tay (Guardian, 24 March 2016)

Elle Hunt, Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter (Guardian, 24 March 2016)

Jane Wakefield, Microsoft chatbot is taught to swear on Twitter (BBC News, 24 March 2016)


"So Microsoft created a chat bot that so perfectly emulates a teenager that it went off spouting offensive things just for the sake of getting attention? I would say the engineers in Redmond succeeded beyond their wildest expectations, myself." (Ars Praetorian)


What a difference a day makes!


Some Time After

Peter Lee, Learning from Tay's Introduction (Official Microsoft Blog, 25 March 2016)

Sam Shead, Microsoft says it faces 'difficult' challenges in AI design after chat bot Tay turned into a genocidal racist (Business Insider, 26 March 2016)

Paul Mason, The racist hijacking of Microsoft’s chatbot shows how the internet teems with hate (Guardian, 29 March 2016)

Dina Bass, Clippy’s Back: The Future of Microsoft Is Chatbots (Bloomberg, 30 March 2016)

Rajyasree Sen, Microsoft’s chatbot Tay is a mirror to Twitterverse (LiveMint, 31 March 2016)


Brief Reprise

Jon Russell, Microsoft AI bot Tay returns to Twitter, goes on spam tirade, then back to sleep (TechCrunch, 30 March 2016)



Updated 30 March 2016

Saturday, November 28, 2015

Predictive and Real-Time Analytics

I shall be chairing the @UNICOMSeminars Data Analytics conference next week: Exploring the Business Value of Predictive and Real-Time Analytics (London, 2 December 2015).

A lot of the obvious applications of real-time analytics are in fraud detection and predictive maintenance. I shall be talking about some of the things I’ve been doing recently in the retail and consumer sector, using rich consumer data to drive real-time personalized engagement with the consumer across multiple touchpoints. We have been exploring ways to combine real-time analysis of the consumer’s current state (e.g. current location, what products they are currently looking at, readiness to buy, etc.) with a rich understanding of what one might call the consumer’s “purchasing genes” – for example, do they like to spend a long time reviewing alternative products before purchasing, do they like to wait for a special offer or voucher before buying, or on the other hand do they like to be the first in their social network to have a given product. This is a lot more complex than simply putting them into a fixed number of “consumer segments”.

Based on this analysis, it is possible to select an appropriate “next action” – for example, selecting the appropriate banner to display to the consumer when visiting the website, or the right topic of conversation for a human customer services agent.
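As a caricature of how such a "next action" decision might be wired up (a rule-based sketch for illustration only; the trait names are invented, and a production system would learn these policies from data rather than hard-coding them):

```python
from dataclasses import dataclass

@dataclass
class ConsumerProfile:
    # Illustrative "purchasing genes" - invented trait names, not our clients' actual model
    deliberates_long: bool
    waits_for_voucher: bool
    early_adopter: bool

@dataclass
class RealTimeState:
    viewing_product: str
    just_purchased: bool
    readiness_to_buy: float  # 0..1, e.g. from a propensity model

def next_best_action(profile: ConsumerProfile, state: RealTimeState) -> str:
    """A hand-written policy for illustration; real systems would learn these rules."""
    if state.just_purchased:
        return "switch_to_post_sales_content"  # stop selling what they already bought
    if state.readiness_to_buy > 0.7 and profile.waits_for_voucher:
        return "offer_time_limited_voucher"
    if profile.early_adopter:
        return "show_new_arrivals_banner"
    if profile.deliberates_long:
        return "show_comparison_and_reviews"
    return "show_default_banner"

print(next_best_action(
    ConsumerProfile(deliberates_long=True, waits_for_voucher=True, early_adopter=False),
    RealTimeState(viewing_product="running shoes", just_purchased=False, readiness_to_buy=0.8),
))  # offer_time_limited_voucher
```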

Thus predictive analytics are helping retail as it moves from omnichannel commerce (which joins up the buying transaction between the online and the physical world) to omnichannel engagement (which joins up all aspects of the relationship with the consumer).

Omnichannel Commerce (Systems of Record): joins up the buying transaction between the online and the physical world

Omnichannel Marketing (Systems of Engagement): joins up all aspects of the relationship with the consumer

Given the large volumes of data involved, and the reliance on legacy systems to produce and process the data, we are not yet seeing this analysis being done completely in real-time. However, some critical steps do have to happen in real-time. For example, as soon as the consumer buys something, our clients want to stop trying to sell it, and move to a post-sales scenario. (In comparison, even the great Google is still showing me advertisements based on what I was browsing three weeks ago. Fail!)

Over the next couple of years, as the technology gets better, the data scientists get even smarter, and the marketing people get more sophisticated, we may expect an increasing proportion of the analysis to be done in real-time, using machine learning as well as more sophisticated analytics tools.