The Department of Internal Affairs (DIA) is quietly amassing power over the digital lives of Kiwis. Now the DIA is consulting on a new regulatory system for media and online platforms. The New Zealand Free Speech Union, unusually, broke an embargo to communicate the DIA's alarming proposal for a powerful online content regulator.
We can view this with a global lens and recognise other forces concurrently at play that add up to power-shifts. This is not just a matter of so-called democratic nations mysteriously pushing to increase their powers over online content. It also includes consolidation of power, such as the DIA's brand-new role as administrator of New Zealand's digital ecosystem (discussed below). It might include the potential for programmable digital currency to be toggled to behavioural permissions (which could ultimately be based on data that the DIA oversees). All I'm urging is: please don't view this latest activity in isolation.
DIA: Public Consultation: Safer Online Services and Media Platforms. Deadline: July 31, 2023.
In a nutshell, the old regulatory tropes are in play.
That the regulator will be independent.
That the codes of practice will achieve ‘safety objectives’.
Regulatory power would not merely extend to 'interventions' such as the management and removal of controversial content by social media and news providers; for lower-risk content, it would also extend to the placing of warning labels and the use of algorithms that might limit that content's reach.
What the heck is lower-risk content? Who is the moral overlord who ascertains when an issue is 'safe and inclusive'? Anything that might confront political or financial power has the potential to be contested. We know governments persistently support political endeavours that fulfil the objectives of institutional stakeholders. We know that when there is injustice, and when people are persistently unheard, they become angry and frustrated. This poppycock elides such human complexities.
This is a hydra of a proposal. Like too much legislation nowadays, unaccountable secondary legislation is key. And who is in control? The DIA.
The DIA enlightens us in its 'consultation document':
‘We’ll do this by creating codes of practice that set out specific safety obligations for larger or riskier platforms. These codes will be enforceable and approved by an independent regulator. The codes will cover things like how platforms should respond to complaints and what information they should provide to users.’ (p.6)
In a nutshell, the codes of practice would be set by the regulator outside the Parliamentary process. The regulator is claimed to be at arm's length from government. Yet New Zealand regulators are never granted powers of enquiry that ensure they can contradict the political goals of government. So the codes of practice would fulfil the political goals of government.
The DIA claims the codes (and 'interventions') only apply to platforms and industry. Yet ultimately civil society's online content would be regulated through the arrangements with industry as a consequence of the codes. This is government regulation of public speech by proxy, with platforms and industry acting as gatekeepers.
The DIA claims that it is important that we act now - because:
New Zealand is at risk of falling behind the protections that other like-minded nations are providing.
The Free Speech Union (FSU), calling this an 'online censorship plan', views the DIA's proposal as overreach. The FSU maintains effective regulations are already in place. In a recent interview, NewstalkZB wondered whether consistency was needed across media. FSU's Jonathan Ayling reasoned that, for example, editorial content from the New Zealand Herald should be regarded differently from private tweets; therefore, there was no need for consistency across different forms of communication. Ayling emphasised that extremist harmful content such as terrorist activity, child pornography and incitement to harm is already illegal.
In this world, our agencies keep making new laws, and ignoring existing ones.
The DIA is doing the same, and it is the DIA that would administer the legislation. It's a power-grab. As the FSU states:
New Zealand already has significant laws in this area, such as the Harmful Digital Communications Act (which already goes too far). While there are problems with the way minors interact online, or the material they consume, is regulation really the answer?
What appears to be off the table in the 'consultation document' are the more nuanced, controversial and uncomfortable issues that inevitably revolve around principles and values, and that require open debate. Or – as I constantly point out – debate around issues for which there are pervasive political and financial conflicts of interest, and where institutional power vastly exceeds the power of the individual.
Ayling emphasised that such codes of conduct would likely suppress public debate. Democracy is dependent on open debate so as to develop resilience and promote counter speech.
Moreover, as the ‘consultation document’ points out, more nuanced issues would likely be managed by algorithms:
‘For lower-risk content, consumers could see more warning labels and content advisories, and there could be changes to the way algorithms recommend content so that harmful content is not actively pushed to users.’
New Zealand's Disinformation Project persistently fails to address the issue of corporate and global power, and the provenance of scientific information and related claims as to what is disinformation or misinformation. Stuff's Tom Pullar-Strecker noted this grey area:
The discussion document appeared to tread lightly over the topics of misinformation and disinformation
We know the Department of the Prime Minister and Cabinet (DPMC) is keen to control information; otherwise it wouldn't be funding the Disinformation Project. The DIA's 'consultation' reflects DPMC priorities.
The ‘consultation document’ states that:
Interventions should be reasonable and able to be demonstrably justified in a free and democratic society. This includes:
Freedom of expression should be constrained only where, and to the extent, necessary to avoid greater harm to society.
How would that work? We have a case study at hand. In the last two years, people not at risk from COVID-19 were forced to accept an injection of a novel biotechnology in order to retain paid employment and pay their rent or mortgages. Ministers Hipkins and Verrall did not focus on the risk of hospitalisation and death from COVID-19 as a prerequisite, but focussed, ridiculously, on 'elimination' of infection.
Cabinet policy was that the whole of New Zealand would be injected. The DIA's mooted censorship regime would ensure broad 'interventions' against dissenters, as the government's policy position was that anyone who rejected the technology posed a 'greater harm to society'.
This latest action by the DIA cannot be considered in isolation.
The construction of digital infrastructures that ultimately extract power from civil society is occurring at scale and pace. Perhaps I am conflating unrelated issues. I don't believe so.
We've already seen the DIA perform public consultation where public comments are largely ignored in the final legislation.
Last month the Digital Identity Services Trust Framework Act 2023 was released. The new regulator is toothless. There is neither foresight nor oversight built into the legislation. We witness habits and patterns that ensure laws grant permissions for technologies without providing the obligations and resources for regulators to proactively steward them and prevent direct and off-target harm, including the abuse of power.
Now they want, in line with other so-called representative democracies, to decrease our powers – our capacity to critically debate and discuss issues of political significance. The New Zealand Free Speech Union summarised their points of concern:
In essence, this is what you need to know:
The Department of Internal Affairs is going to release a consultation document which proposes to have a law drafted which would establish a new 'Regulator' for online content;
This Regulator would have broad powers, far more significant than any that exist at the moment, over the content you put up on social media or other platforms. (Even the Free Speech Union's updates and emails would be subject to the Regulator's reach, as our 'platform' is larger than 25,000 users – we don't think they should have a say on what our defence of free speech looks like.)
Codes would be drafted, which would outline what content, material and speech are allowed. But Parliament won’t draft the Codes. In fact, there is no representative accountability over what is included in the Codes at all.
The draft law would just establish the Regulator, with the broad responsibilities of the Codes. Away from Parliament, the Select Committee process, and from your right to engage with politicians or vote out those you disagree with, industry, NGOs, and academics will write the codes which dictate what you're allowed to say online.
They advise that the penalty for platforms that do not comply with takedown notices should be increased to ‘reflect the seriousness of non-compliance’. Currently, it’s $200,000 for each incident of non-compliance.
This is all part of the attempts by the Government to control information and the narrative. In their definitions of safety and harm, the DIA claim that ‘Content can cause harm to wider society. This might look like individuals or communities losing trust in, or access to, key public institutions such as the legal, health and education systems, [and] freedoms of identity…’
We have until July 31 to submit on this consultation. They expect legislation to be introduced to Parliament next year.
The DIA is amassing extraordinary power over the digital landscape.
NZ’S DIGITAL ECOSYSTEM TSAR: The Department of Internal Affairs
The DIA oversees our RealMe identity system as well as the legislation that governs it. Information sharing arrangements have lowered the barriers to behind-the-scenes sharing of civil society's information. No regulator has funding to check whether abuses of power might be happening. Understanding how this is occurring crescively – bit by bit – and might lead to a rights 'crunch' is completely out of scope.
Increasingly, it is much easier to get a government job, or access tertiary education, if you have a RealMe identity.
The DIA administers the brand-new Digital Identity Services Trust Framework Act 2023.
The Digital Identity Services Trust Framework Bill, introduced by David Clark in August 2021, received Royal Assent last month (April 2023). PSGR, along with over 4,000 people and organisations, registered their concerns.
The Act promises to achieve
‘provision of secure and trusted digital identity services for individuals and organisations’
PSGR considers that the governance/regulatory system is too narrow in focus: the regulatory framework is simply unfit for the complex purpose it is meant to achieve. The DIA privately consulted with institutional stakeholders before drafting the Digital Identity Services Trust Framework Act. Therefore, as PSGR stated, there was no evidence that
‘anticipatory regulation will occur to actively prevent harm. It is not evident that these institutions will have adequate powers of scrutiny, and the regulatory teeth, both to anticipate error and malfeasance, and to monitor and analyse the boundaries of risk and prevent harm before the harm has occurred.’
The public are left, in the final legislation, to contest an injustice on a case-by-case basis.
What we also observe in the creation of new Bills is the writing out of non-industry stakeholders in any consultations before a Bill is presented in Parliament, and then the dismissal of public concerns during consultations on the Bill.
When it comes to digital information, privacy is persistently flagged as a top concern. Yet this can mislead us into focusing exclusively on privacy. Privacy discussions tend to concern only the public's supply of personal data to corporate providers such as retail banks. Privacy concerns do not extend to back-end sharing between government agencies.
What is not in scope? Long-term erosion of human rights.
What happens when the DIA is the administering agency for this new Act – and simultaneously oversees our private RealMe information, enabling the linkage of this information through approved information sharing agreements (ASIAs) across government platforms?
Is there potential for abuse of power and restriction of autonomy and rights?
I believe so.
I wrote in a Substack post last year:
The Department of Internal Affairs administers the Electronic Identity Verification Act 2012 and the Identity Information Confirmation Act 2012, while the Ministry of Justice administers the Privacy Act. The Department of Internal Affairs has oversight over a tremendous quantity of information. There is little public oversight concerning the back-end operations of the DIA's identity verification service.
It's decoupled from RealMe, which is the public interface, but what happens behind the scenes is black-boxed. It's unclear how information held by the intelligence and law enforcement agencies is aggregated with DIA data. Schedule 2 of the Privacy Act lists current ASIAs. The first year of the pandemic coincided with an increase in ASIAs.
One assumes that the Electronic Identity Verification Act 2012 sets the standards for the DIA's identity verification service; however, the Act's purpose seems to focus on facilitating secure interactions between individuals and agencies. This is quite different from ASIAs, which are agency-to-agency. As the Act clarifies in Section 39, individuals who sign up to an identity scheme can't prevent cross-agency information transfers when government agencies are parties to an ASIA.
Identification, or identity proofing,
‘verifies and validates attributes (such as name, birth date, fingerprints or iris scans) that the entity presents.’
An individual may share attributes with a broader digital identity user population. Personal attributes can include data points such as IP addresses, activity history and location. Theoretically the individual has control over what attributes are shared, but inter-agency government agreements override this.
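To make the mechanism concrete, here is a minimal sketch of how consent-based attribute sharing can be overridden by an inter-agency agreement, as Section 39 describes. The record structure, field names and function are entirely hypothetical, invented for illustration – they are not the DIA's actual schema or logic:

```python
from dataclasses import dataclass


@dataclass
class IdentityRecord:
    """Hypothetical identity record: attributes plus per-attribute consent."""
    attributes: dict  # e.g. name, birth date, IP address, location
    consent: dict     # per-attribute sharing consent set by the individual


def can_share(record: IdentityRecord, attribute: str, requester_is_asia_party: bool) -> bool:
    """Sharing is allowed if the individual consented, OR if the requesting
    agency is party to an approved information sharing agreement (ASIA),
    which overrides individual consent (cf. Section 39)."""
    if requester_is_asia_party:
        return True  # the agreement trumps the individual's settings
    return record.consent.get(attribute, False)


record = IdentityRecord(
    attributes={"name": "Jane Doe", "birth_date": "1980-01-01", "ip_address": "203.0.113.7"},
    consent={"name": True, "birth_date": False, "ip_address": False},
)

print(can_share(record, "birth_date", requester_is_asia_party=False))  # False: no consent
print(can_share(record, "birth_date", requester_is_asia_party=True))   # True: ASIA overrides
```

The point of the sketch is structural: when the override branch exists, the consent dictionary is decorative for any agency inside an agreement.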
It appears that the Privacy Act does not limit the collection of biometric or identity information, and the principles are weakly worded. The Electronic Identity Verification Act 2012 focusses on secure online interactions, and the Digital Trust Framework Bill concerns the governance of service providers such as RealMe as well as the DIA's identity verification service.
This is why separation of powers to review information sharing activity is critical.
Yet the underfunding of regulatory powers is already evident in the resourcing provided to the Privacy Commissioner for oversight and compliance. Compliance and enforcement is a new primary activity of the Privacy Commissioner; these activities were previously lodged under the Information Sharing and Matching output class. With only a NZ$2 million p.a. budget and a team of four dedicated staff engaged in active compliance and enforcement to investigate systemic problems, it is unrealistic to expect that the Privacy Commissioner has oversight of what is happening behind departmental doors with regard to private citizen information, across a government sector with a NZ$100 billion budget.
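To put that resourcing in perspective, a quick calculation using the two figures above:

```python
# Privacy Commissioner's active compliance budget vs total government spend
# (both figures taken from the text above).
compliance_budget = 2_000_000         # NZ$2 million p.a.
government_sector = 100_000_000_000   # NZ$100 billion

share = compliance_budget / government_sector
print(f"{share:.4%}")  # 0.0020% - two cents of oversight per thousand dollars of spend
```

On those numbers, active compliance funding amounts to one fifty-thousandth of the spending it is meant to watch over.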
There is no meaningful stewardship scheme beyond the small $2 million budget held by the Privacy Commissioner, and new legislation coming through the door lacks inquisitorial power.
But wait… there’s more - flexibility for corporations.
The Digital Identity Services Trust Framework Bill has been
'drafted to govern and provide an opt-in accreditation framework for the "Trusted Framework" providers, the public and private institutions that would supply digital identity services.'
Once the Trust Framework is in place the government would take steps to accredit RealMe services. It’s problematic that the DIA oversees both RealMe and the governance system that is meant to accredit it.
The board has no investigative powers. The board is not responsible for reviewing the global environment to assess potential short- or long-term risks, in order to triangulate New Zealand's rules and operations with global challenges and new knowledge. Under section 93, the authority could suspend or cancel an accreditation due to an act or omission that may pose a risk to
‘the security, privacy, confidentiality, or safety of the information of any trust framework participants or the integrity or reputation of the trust framework’.
However, with no inquisitorial powers and, importantly, no resourcing for the authority to review the global environment in order to identify potential harms, it is unlikely the board's efforts will be anticipatory.
As I discussed last year, it seems the Privacy Commissioner really can't do much to protect us from government over-reach:
The Privacy Commissioner is charged to protect the privacy of individuals. In addition to education and encouraging the reporting of incidents, staff have a nominal budget of NZ$2 million for active compliance and enforcement. The Privacy Commissioner is not looking under the hood to check whether agencies are conducting themselves responsibly with their handling of private data.
The sharing of the biometric and digital data of citizens is operational across New Zealand government agencies and permitted by the Privacy Act 2020. Webbed networks of digital information sharing are already occurring in New Zealand through approved information sharing agreements (ASIAs) across government platforms. ASIAs have increased since the start of the pandemic. It’s the backend sharing of data that ordinary Kiwis don’t see.
(The Privacy Commissioner recently held a consultation on privacy regulation of biometrics, and while this was widely covered by consultancy firms, the legacy media didn't report that it was happening.)
A mooted Consumer Data Right Bill, overseen by the Hon Dr David Clark, will join this legislative framework. As Clark has explained:
A consumer data right (CDR) is a mechanism that requires data holders, such as banks and electricity retailers, to safely and securely share data with third parties (like fintech companies) following consent from the customer.
Not surprisingly, the fintech industry can't wait. It's difficult to understand where the Privacy Act stops and this Bill might start.
Then we have RealMe, the front end of New Zealand's digital identity system – the public login service. A facial photo is required, using a facial recognition system called Identity Check. RealMe is a mandated all-of-government ICT Common Capability: 'it's a technology that can be used by 1 or more agencies, or across all-of-government, to support business outcomes.'
The backend is the verified personal information that is held by the Department of Internal Affairs (DIA). It is maintained and developed by Datacom. Currently, the biometric data held by the DIA includes facial images and liveness testing. The liveness test is in the form of a video.
The DIA's resources and operations expanded considerably over the years 2011–2022. In 2011, total appropriations were $268,239,000. In 2022 the budget sits at $1,223,005,000. The DIA's annual income has increased by almost a billion dollars.
What's also a little, well, whiffy, is the fact that the Department of Internal Affairs (DIA) is the department responsible for the back-end management of personal data and the administration of the Electronic Identity Verification Act 2012, which includes RealMe – but then it is also planning to have oversight of the proposed Digital Identity Services Trust Framework Act.
And of course, the DIA already has a bundle of contracts with corporations as well.
A digital driver's licence is in play. Of course, police already have digital access to driver data. But this would integrate biometric facial recognition data and contain more information which, presumably, might then be accessed by other agencies through ASIAs. The DIA is leading the biometric database work which would enable the digital driver's licence functionality.
Of course, the economic and social benefits of digital identity are estimated to be between 0.5 and 3 per cent of GDP – roughly NZ$1.5 to $9 billion. A mere $2 million for the Privacy Commissioner is pitiful, and no apparent budgetary requirement is set aside as a foresight measure for the digital trust framework.
Civil society has been left outside the policy development stages, and then largely dismissed. Once new frameworks are in place, regulators that are underfunded and under no obligation to conduct active enquiry can only provide a smokescreen of legitimacy.
Global actions consistently expand power for digital providers, while reducing public oversight and limiting avenues for appeal.
The DIA's digital censorship consultation is entwined with an increasingly opaque global digital environment.
This represents a colossal power-shift.
Corporations are interlinking – at speed and pace – with government agencies to provide digital services that consistently involve public information.
Such arrangements are inherent in a censorship regulation scheme where institutions patrol content. But these arrangements would be out of the public's view.
It's not just the DIA – it could involve relationships between the DIA and other institutions.
What happens when corporate providers, for example, work with the DIA in a censorship capacity, or in some other capacity (software, algorithm provision), as an interface provider ensuring information can be toggled and made available?
RESERVE BANKS, THE IMF & CBDC ‘CAPACITY DEVELOPMENT’
These arrangements are mooted by Reserve Banks. Their services would be ostensibly outside Parliamentary – and democratic - processes.
For example, the IMF is currently coaching Reserve Banks (RBs) on transitions so that RBs can embed technologies to enable smart contracts for programmable central bank digital currencies (CBDCs). This would substantially increase RBs' political power.
As the Bank of England has outlined:
Smart contracts are pieces of code which are able to self‑execute payments based on some pre‑defined criteria.
Who would provide such services?
‘private sector Payment Interface Providers would connect in order to provide customer‑facing CBDC payment services. Payment Interface Providers could also build ‘overlay services’ — additional functionality that is not part of the Bank’s core infrastructure, but which might be provided as a value‑added service for some or all of their users.’
The IMF describes this as 'Central Bank Digital Currency Capacity Development'. It is leading and stewarding the processes, and effectively directing the frameworks that will control this, through a proposed CBDC Handbook – a euphemism for the expansion of central bank powers. But really, it might just be, as former Deputy Governor of the People's Bank of China and now IMF Deputy Managing Director Bo Li suggests, about capacity development of the IMF itself:
‘IMF capacity development activities involve multiple stakeholders, and we want to be collaborative and inclusive. Our donors, external experts, the recipient countries, and the IMF, are equal partners.’
And, oh my, the New Zealand Reserve Bank just completed a three-part consultation, where the majority of submitters focussed on CBDCs… and many focussed on the potential for abuse of human rights. The little process chart in the summary document didn't highlight the upholding of democratic power through Parliamentary processes, or the protection of human rights, as key design principles.
This is the release and management of currency for heck’s sake. It is supremely political.
I’m not sure that this will be built into the IMF’s CBDC Handbook (or instruction manual on how to do it according to the IMF).
Whether information flows through digital technologies administered by the DIA or through technologies administered by Reserve Banks, it is ultimately locked inside government contracts (and secrecy provisions), accessible only to those government and Reserve Bank partners and stakeholders, including large global corporations.
These institutions can interlink – couple in such a way as to increase and sustain their power – through actions which limit or prevent public objection and contestation.
SEE MY CBDC INTERVIEW ON REALITY CHECK RADIO WITH PAUL BRENNAN
Smart contracts might involve, for example, access to income and/or benefits based on compliance (such as keeping up to date with vaccine schedules) and behaviour (harmony with government priorities on social media, harmonising with 'climate' goals through actions and words).
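A smart contract, in the sense quoted earlier, is simply code that self-executes a payment decision from pre-defined criteria. The following minimal sketch shows how such criteria could, in principle, be made behavioural. It is entirely hypothetical: the rule names and wallet fields are invented for illustration and are not drawn from any actual IMF or Reserve Bank design:

```python
# Hypothetical sketch of a programmable-payment rule. Every name here is
# invented for illustration; the point is the mechanism, not any real system.

def payment_permitted(wallet: dict, rules) -> bool:
    """A payment self-executes only if every pre-defined criterion is met."""
    return all(rule(wallet) for rule in rules)


# Invented behavioural criteria, for illustration of the mechanism only.
rules = [
    lambda w: w.get("vaccine_schedule_current", False),  # compliance criterion
    lambda w: w.get("social_posts_flagged", 0) == 0,     # behaviour criterion
]

wallet = {"vaccine_schedule_current": True, "social_posts_flagged": 2}
print(payment_permitted(wallet, rules))  # False: flagged posts block the payment
```

The sketch shows why programmability is politically significant: the gatekeeping logic lives in the rule list, and whoever writes the rules controls the payment, with no further step involving the account holder.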
It is the DIA that is now proposing a big new censorship regulatory scheme, run by an unaccountable regulator. The DIA oversees the brand-new and unaccountable Digital Identity Services Trust Framework Act.
The Act includes provision for Orders in Council, the production of secondary legislation that can be made quickly and in secret by a small cohort of government Ministers.
New Zealand might need to wake up, and very quickly.