The recent advice from TechUK that councils should establish digital boards is a welcome call to raise ‘digital’ to board-level importance. As Rishi Sunak has said:
“Digital doesn’t belong in the basement, it belongs in the boardroom.”
There is also a welcome realisation that data is a focal point: that there should be an insistence “on modular building blocks and open standards to create a common data structure”. The problem, though, is that data sharing will still be a challenge:
“A lack of information sharing was also apparent, which hampers collaboration. Some 46% of those surveyed disagreed that local public services shared information effectively. One non-executive councillor from a London borough said local authorities were “deeply conservative in their approach to information processing” and, as a result, were “missing opportunities for more efficient and effective working through digital transformation”.
It also found a split in opinions on sharing public data with the private sector, with only 6% agreeing that public sector data should be shared with the private sector to develop appropriate solutions for local government. While so-called digital enthusiasts, cabinet members and leaders were somewhat positive, 50% of digital sceptics “overwhelmingly feel that data should not be used in this way”.”
Digital boards will help, but with increasing digital ambitions such as information hubs, ecosystems and data-driven innovation, overcoming the data-sharing challenge is vital if transformation is not to stall at the first hurdle, remembering that this same challenge plagued e-government initiatives in 2005. (Sounds strangely familiar…?)
The Local Digital Declaration states:
“…we will ‘fix our plumbing’ to break our dependence on inflexible and expensive technology that doesn’t join up effectively.”
But the declaration doesn’t elaborate on how that ‘joining-up’ will be achieved.
Incorvus’ answer is, as it has always been, that the data must come first. Data is the lifeblood of digital organisations. If the data doesn’t flow, or is of poor quality, the body corporate gets ‘ill’: things don’t work as they should. So fixing the ‘plumbing’ is only part of the task. Yes, the shift is away from application-centric, peer-to-peer networks towards open source, but it also has to be towards data-centric architectures. You have to know what the ‘flow’ requirements are before you can ‘specify the plumbing’.
To give the public sector (or indeed anyone) confidence in the security, governance, management and control of shared information, whether in a hub or an ecosystem, the challenges have to be answered at a very granular level. Data-centricity addresses this through a focus on metadata at cell level, so that whatever project or application is eventually intended, the elements of governance, workflow and audit are commonly available to any controlling application or authorised demand. This is, if you will, something of an inversion of the new Solid ecosystem announced by Tim Berners-Lee this week, intended to ‘inrupt’ the web. The underlying principle is the same: make the key data available to the entitled, not the other way around. It’s about making data available to the ‘plumbing’.
The data-centric approach is the way to answer the concerns about security, shareability, governance and confidence that the public sector has expressed around data sharing. You can only share something securely if the control metadata is appended at the data level itself. Why? Because the data has to flow, and that flow is controlled, sometimes beyond the initial domain within a multi-domain ecosystem, creating further insecurities for DPOs and those concerned about personal or sensitive data.
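As a minimal sketch of what “control metadata appended at the data level” might look like, here is a hypothetical Python illustration. The `Cell` class, its field names, the domain names and the entitlement rules are all invented for this example; the point is only that each data item carries its own classification, provenance and audit trail, so any application crossing a domain boundary can enforce policy at source.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Cell:
    """A single data value plus its own cell-level control metadata (hypothetical)."""
    value: object
    classification: str          # e.g. "public", "personal", "sensitive"
    owner_domain: str            # the domain that originated the data
    audit: list = field(default_factory=list)

    def release_to(self, domain, entitled):
        """Return the value only if the requesting domain is entitled,
        recording every attempt (allowed or not) in the cell's audit trail."""
        allowed = domain in entitled
        self.audit.append((datetime.now(timezone.utc).isoformat(), domain, allowed))
        return self.value if allowed else None

# An illustrative record held by one council ("council-A")
record = {
    "name": Cell("Alice", "personal", "council-A"),
    "bin_day": Cell("Tuesday", "public", "council-A"),
}

# Per-cell entitlements: who may see which element
entitlements = {"name": {"council-A"}, "bin_day": {"council-A", "council-B"}}

# A second domain ("council-B") requests the record: it receives only
# the cells it is entitled to, and every attempt is audited at the cell.
shared = {k: c.release_to("council-B", entitlements[k]) for k, c in record.items()}
# shared == {"name": None, "bin_day": "Tuesday"}
```

Because the audit and entitlement checks live with the cell rather than in any one application, the same controls travel with the data wherever the ‘plumbing’ carries it.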
This means that before any technology is discussed, the ‘business’ has to sit down, think hard and clearly map out the intended metadata structures, hierarchies, terms, vocabularies, relationships and dictionaries, in order to establish a common data ontology within the enterprise and, in the public sector, with relevance to higher levels of government.
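To make that mapping exercise concrete, here is a minimal Python sketch of the kind of artefact it produces: an agreed vocabulary that harmonises different departments’ local field names onto one shared term before any data flows between systems. The field names and vocabulary entries are invented for illustration.

```python
# Hypothetical agreed vocabulary: local term -> shared ontology term.
# In practice this would be the output of the 'business' mapping exercise.
SHARED_VOCABULARY = {
    "client_dob":            "person.date_of_birth",
    "date_of_birth":         "person.date_of_birth",
    "addr_line_1":           "address.line_1",
    "first_line_of_address": "address.line_1",
}

def harmonise(record):
    """Rename each field to its agreed ontology term; fields with no
    agreed mapping are flagged rather than silently passed through."""
    out = {}
    for local_term, value in record.items():
        shared_term = SHARED_VOCABULARY.get(local_term)
        if shared_term is None:
            raise KeyError(f"'{local_term}' has no agreed mapping yet")
        out[shared_term] = value
    return out

# Two departments hold the same facts under different local names...
housing = harmonise({"client_dob": "1980-01-01", "addr_line_1": "1 High St"})
social = harmonise({"date_of_birth": "1980-01-01", "first_line_of_address": "1 High St"})

# ...and after harmonisation both speak the same ontology.
assert housing == social
```

Raising an error on unmapped terms is deliberate: gaps in the vocabulary surface early, during planning, rather than as silent mismatches after data has already been shared.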
This is highlighted by a recent announcement from NHS Digital, which is to publish new guidance on common data standards. NHS Digital has already reported to the Lords Select Committee that “NHS data is not fit for AI!” Julian Huppert, chair of the independent review panel for DeepMind Health, states that NHS trusts have many different systems, and that some hospitals run hundreds of databases that “don’t talk to each other.” A great diagnosis, but the treatment Huppert advocates, a “secure, centrally managed system”, is reminiscent of SPINE and doesn’t go far enough, because data is now understood to be “a continuous renewable resource”. This point is underlined by comments from Martin Severs, medical director at NHS Digital, who said:
“Medical data is very chaotic at source. My phone is more powerful than many of the computers in hospitals. There is a lot of focus in the media about the development of algorithms, but very little focus on the preparation of data….”
“All the data the NHS holds is funded by the British taxpayer. Any use of that data should generate benefits back to the taxpayer. While we should open up enough data as possible for specific research and use cases, within those data-sharing agreements there should be a return on investment on that data. There is billions of pounds’ worth of value in this data. We need to encourage innovation and allow failure at low costs, but there needs to be a return on investment of that data back into the NHS.”
The digital citizen has to be a consistently defined concept, understood across government, before that return on investment can happen. Ambitious work on AI, blockchain or any other digital transformation project has to be preceded by, not run alongside, the “back-to-basics drive to get data right!” And a core activity of that drive has to be addressing metadata issues. Assuming organisations manage to achieve that planning stage, they will still be confronted with the difficulty of discovering and harmonising metadata within packaged applications (some large organisations have as many as 1,300 of these), which is where they might want to consider using Safyr to discover and extract metadata and metadata structures, instead of diluting the business benefits with intensive and costly manual intervention. That would be a good starting point for the data-sharing challenge they will have to face. Without an understanding of metadata, and without a granular approach (and solutions in this area are as rare as hen’s teeth), the challenge will remain an uphill struggle: a continuance of the failed 2005 e-government journey.