Study Tour – Kasper – Politiken

AI Represents a “Genomic” and “Stochastic” Paradigm Shift Requiring Organizational Rewiring

The core premise here is that AI is not merely another software update or a continuation of standard digitalization; it is a fundamentally different class of technology that behaves probabilistically rather than deterministically. Because of this, its adoption follows a “normal” technological S-curve (like electricity), but it creates a friction point where exponential technological growth outpaces linear organizational adaptation.

Elaboration

The Nature of the Beast: Genomic and Stochastic

To understand the gravity of the current moment in media technology, one must first understand the specific definitions applied to Artificial Intelligence in this context. The speaker introduces two critical descriptors for AI: it is “genomic” and it is “stochastic.” These are not buzzwords but architectural definitions that dictate how a media company must react.

When the speaker describes AI as a “genomic” technology, they are arguing that it is foundational and pervasive. Much like a genome determines the expression of traits across an entire organism, AI has the potential to impact every single aspect of the media value chain. It is not limited to a specific vertical, such as the printing press (distribution) or the word processor (production). Instead, it permeates the entire ecosystem—from the backend metadata tagging and the research phase of journalism to the actual writing, the distribution via algorithms, and the commercial optimization of paywalls. This “genomic” quality means that treating AI as a siloed experiment in a corner of the IT department is a strategic failure. It must instead be taken up and applied by domain experts across the entire organization, essentially rewriting the DNA of how the company functions.

However, the second descriptor, “stochastic,” presents the primary challenge to this integration. For the past 30 years, digital transformation in media has been built on rule-based, deterministic computing. If you input $X$ into a CMS, you get $Y$ output every single time. It was logic-driven and predictable. AI, specifically Large Language Models (LLMs) and Generative AI, is stochastic—meaning it is probabilistic. It predicts the next token or action based on statistical likelihood, not rigid rules. This fundamental shift creates massive friction in organizations built on accuracy and facts. As the speaker notes, this is why “some people are really angry with this technology.” Editors and journalists, whose professional identities are forged in the fire of factual accuracy, struggle to tolerate a tool that “sometimes makes mistakes” by design. The mindset shift required here is moving from “this tool is broken because it erred” to “this tool is probabilistic; how do we manage the variance to capture the value?”

The S-Curve and the Three Camps

The argument further contextualizes this shift within historical technological adoption. The speaker rejects the extremes of the current discourse. On one side, there are the “Skeptics,” who dismiss AI as a bubble destined to burst. On the other, the “Exhibitionists” or accelerationists believe we are on the verge of Superhuman Intelligence (AGI) that will render all current structures obsolete by 2027.

The speaker posits a “Realist” view: AI is a “normal technology” that follows a standard S-curve of innovation, similar to electricity or the internet. It took 30 years for electricity to move from industrial invention to household ubiquity. While AI adoption is accelerating faster than electricity did, it is still a process of gradual maturation. We are currently in the transition from “early experimentation” to “industrial application.” This view is crucial for a legacy media house because it dictates patience. It suggests that the revolution will not happen overnight, but ignoring it is fatal. It frames AI not as magic, but as infrastructure that will eventually become as invisible and essential as the power grid.

The Friction of Exponential Tech vs. Linear Humans

Perhaps the most profound insight in this argument is the recognition of the speed mismatch. AI development is exponential; the capabilities of models like GPT-2 (2019) compared to GPT-4 (2023) represent a vertical leap in capability. However, human organizations change linearly. “Realizing the value of humans needs to change organizations… that is not an exponential process,” the speaker argues.

This creates a strategic tension. The technology is capable of revolutionizing workflows immediately, but the workflows themselves are entrenched in centuries of habit (JP/Politiken has brands dating back 150+ years). The “rewiring” of the organization—changing how teams collaborate, how stories are pitched, and how products are defined—takes time. Consequently, the organization is currently focusing on “short-term potential”—augmenting existing efficiencies—because that is what the linear organization can digest right now. The “long-term potential,” which involves completely reimagining the news product, requires a level of organizational plasticity that has not yet been achieved. The argument concludes that the true barrier to AI in media is not the quality of the algorithms, but the adaptability of the sociology within the newsroom.


The “Centralized Innovation vs. Decentralized Application” Paradox

The core premise is that successful AI implementation in a media group requires a delicate, fluid organizational structure. It demands a centralized unit to achieve critical mass, technical excellence, and scalability, balanced against decentralized collaboration to ensure tools are actually adopted by journalists. Managing the “drift” between these two poles is the primary management challenge.

Elaboration

The Evolution from Lab to Infrastructure

The speaker traces the organizational journey of JP/Politiken’s AI efforts to illustrate a common maturity model in corporate innovation. It began in 2019 with “Instablight,” an innovation-focused, risk-tolerant setup funded partly by external grants and university partnerships. This was the “Lab Phase.” At this stage, the goal was not efficiency or profit, but understanding. They needed to play with the technology (like GPT-2) to understand its ethical implications and potential trajectory.

However, as the technology matured from a curiosity to a viable business tool, the organization had to shift. The “Lab” became the “Central AI Unit.” This shift represents a move from exploration to exploitation. The current unit consists of 17 people, including AI specialists, engineers, front-end developers, and hybrid product managers. The argument made here is that a single news brand (even a large one) rarely has the resources to hire high-level data scientists and ML engineers. By centralizing this talent at the group level, the company creates a center of excellence that can serve multiple brands (Denmark, Norway, Sweden, Germany). This provides the “critical mass” necessary to build robust, scalable infrastructure rather than fragile, one-off prototypes.

The Tension of Centralization

The speaker is acutely aware of the pitfalls of this centralized model, specifically referencing the “drift” or the “ivory tower” effect. Centralized IT and product teams have a historical tendency to disconnect from the reality of the end-user. They build tools that are technically impressive but practically useless in a fast-paced newsroom. The speaker notes, “I used to be in the ranks criticizing central IT… so it’s top of mind for me.”

To combat this, the group has developed a specific engagement model. They do not simply throw software over the wall. Instead, they rely on “bridge builders”—product managers with dual backgrounds in journalism and technology. Furthermore, they rely heavily on middle management in the newsrooms. The argument is that top-level strategic alignment is often too abstract, and ground-level journalist interaction can be too scattered. The “sweet spot” for collaboration is the editorial middle management—the section heads and digital editors who understand both the editorial mission and the production bottlenecks.

Scalability vs. Customization

This argument delves into the group’s technical philosophy. Decentralize fully and you get “lots of pilots” but nothing scales: every brand builds its own headline generator, and they all break when the API changes. Centralize fully and you get a generic tool that fits nobody.

The proposed solution is a “Centralized Infrastructure, Decentralized Interface” model. The AI Unit builds the heavy lifting—the vector databases, the API wrappers, the prompt engineering libraries, the compliance guardrails—centrally. This is the “backend” mentioned in the transcript. However, the application of this infrastructure is customized for the specific brands. The “Magnet” platform acts as this middleware.
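The division of labor can be sketched in code. This is a minimal illustration of the “Centralized Infrastructure, Decentralized Interface” pattern, not the actual Magnet API: all class, field, and template names below are hypothetical. The central unit owns the model plumbing and guardrails; each brand contributes only a thin configuration layer.

```python
# Hypothetical sketch of "centralized infrastructure, decentralized interface".
# The central platform owns prompt assembly and guardrails; brands supply
# only per-brand configuration. Names are illustrative, not the Magnet API.

from dataclasses import dataclass


@dataclass
class BrandConfig:
    """Per-brand customization: language and prompt wording."""
    name: str
    language: str
    headline_template: str  # brand-specific prompt template


class CentralAIPlatform:
    """Shared backend: one place for model access, compliance, logging."""

    def __init__(self) -> None:
        self.brands: dict[str, BrandConfig] = {}

    def register_brand(self, cfg: BrandConfig) -> None:
        self.brands[cfg.name] = cfg

    def build_headline_prompt(self, brand: str, article_text: str) -> str:
        cfg = self.brands[brand]
        # Central guardrail: the model is only ever asked to transform
        # supplied text, never to invent news from scratch.
        return cfg.headline_template.format(
            language=cfg.language, article=article_text
        )


platform = CentralAIPlatform()
platform.register_brand(BrandConfig(
    name="politiken",
    language="Danish",
    headline_template="Write a {language} headline for:\n{article}",
))
prompt = platform.build_headline_prompt("politiken", "City budget passes.")
```

The design choice mirrors the argument: adding a new brand means registering a config, not forking the backend.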

The “Transfer” Gap

A significant portion of this argument addresses the difficulty of “Technology Transfer.” A question from the audience highlights a universal pain point: The AI team builds a prototype (e.g., a video spotter), but the Product Team (who manages the CMS and website) refuses to implement it because it’s not on their roadmap.

The speaker acknowledges this as an unsolved friction. Prototypes are easy; production is hard. A prototype can run on a laptop; a production tool needs 99.9% uptime, security compliance, and distinct SLAs. The argument here is that the AI unit must resist the urge to stay in “prototype mode.” They must accept the slower, linear process of hardening tools for production. This requires a “handoff” discipline where the AI team effectively acts as a vendor to the internal product teams.

The ultimate organizational vision is, strikingly, the dissolution of the AI unit itself. The speaker argues that in the long term, “The intention is to close down the central AI units.” Why? Because eventually, AI will just be “tech.” There won’t be an “AI team,” just as there isn’t an “Electricity Team” or an “Internet Team.” AI will be embedded in the skills of ordinary developers and journalists. However, the argument closes with a caution: we are not there yet. Closing the unit now would be premature because the technology is still evolving too rapidly for generalists to manage.


The Three Pillars of Value (Metadata, Recommendations, and GenAI)

The core premise is that AI value in media is not monolithic. It is derived from three distinct but interconnected pillars: Metadata (structure), Recommender Systems (distribution), and Generative AI (creation/augmentation). While GenAI gets the hype, Metadata is the foundation, and Recommendations provide the immediate commercial uplift.

Elaboration

Pillar 1: Metadata – The Unsung Hero

The speaker explicitly identifies metadata as the “least sexy area” but implies it is potentially the most critical for long-term success. In the legacy print world, content was ephemeral; it was published and then forgotten in physical archives. In the digital AI era, content is an asset that needs to be retrieved and recycled.

The argument is that manual tagging by journalists is unreliable. Journalists want to write stories, not fill out form fields about “entities” and “topics.” Therefore, the AI Unit utilizes Natural Language Processing (NLP) to automate the generation of metadata. This includes extracting entities (people, places, organizations), determining topicality, and creating vector embeddings (mathematical representations of the text’s meaning).

Why is this a core argument? Because without high-quality, AI-generated metadata, the other two pillars fail. You cannot build a “Content-Based Recommender System” if the machine doesn’t understand what the content is. You cannot build a “Retrieval Augmented Generation” (RAG) tool for journalists to research archives if the archives aren’t searchable by concept. The metadata strategy is the prerequisite for the “activation of content” both editorially (finding old context) and commercially (selling targeted ads based on content semantics rather than user tracking, which is crucial in a cookie-less world).
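The retrieval mechanics behind this argument can be shown with a toy example. Real systems use NLP models for entity extraction and dense embeddings; in this sketch a simple bag-of-words count stands in for an embedding, purely to illustrate why vectorized metadata makes an archive searchable by concept rather than by exact tag.

```python
# Toy illustration of concept-based archive retrieval. A word-count
# vector stands in for a real model embedding; cosine similarity is
# the same comparison a production vector database would run.

import math
from collections import Counter


def embed(text: str) -> Counter:
    """Stand-in 'embedding': word counts (real systems use model vectors)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


archive = {
    "climate-2021": "carbon emissions climate targets parliament vote",
    "sports-2021": "handball final denmark wins championship",
}

query = embed("climate emissions policy")
best = max(archive, key=lambda k: cosine(query, embed(archive[k])))
# best is the climate article: retrieved by conceptual overlap even
# though no journalist ever tagged it "policy".
```

The same similarity machinery powers both the content-based recommender and the RAG research tools discussed later, which is why metadata is the prerequisite pillar.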

Pillar 2: Recommender Systems – The Algorithmic Editor

This section contains a sophisticated argument about the role of algorithms in editorial decision-making. The speaker frames the core role of a publisher as “balancing what readers want to read and what we deem journalistically important.” In print, this was the front-page mix. In digital, it was the homepage flow. In the AI era, it is the “Personalized News Flow.”

The argument presents a dichotomy between two algorithmic paradigms:

  1. Collaborative Filtering: “People like you read this.” This creates a “popularity bias.” It drives high clicks but creates echo chambers and narrows the news agenda to only the most sensational stories.
  2. Content-Based Filtering: “You read about climate change, here is more about climate change.” This respects the user’s specific interests but creates a “niche bubble” and ignores the wider news agenda.

The speaker presents A/B test data showing that while both methods increase engagement (Click-Through Rate), they produce radically different news flows. The “Commercial Argument” is clear: personalization drives subscription sales (proven by the test showing a massive uplift in conversion when using recommenders vs. manual curation). However, the “Editorial Argument” warns against letting the algorithm take over completely. The strategy, therefore, is a hybrid: using recommender systems to optimize the “churn” and “conversion” metrics while manually overriding or tuning the algorithms to ensure the “public interest” mission is maintained. They are moving toward a system where AI determines the relevance of a story to a user, but the Editor determines the importance of the story to society.
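The hybrid strategy can be sketched as a blended scoring function. The weights and field names below are illustrative assumptions, not the group’s actual ranking formula: the recommender supplies per-user relevance, the editor supplies societal importance, and the feed is ordered by a weighted mix.

```python
# Sketch of the hybrid ranking described above: algorithmic relevance
# blended with editor-set importance. Weights and fields are illustrative.

def rank_feed(stories, relevance, editorial_weight=0.4):
    """stories: dicts with 'id' and editor-set 'importance' (0-1);
    relevance: per-user scores (0-1) from a recommender."""
    def score(s):
        algo = relevance.get(s["id"], 0.0)
        return (1 - editorial_weight) * algo + editorial_weight * s["importance"]
    return sorted(stories, key=score, reverse=True)


stories = [
    {"id": "celebrity", "importance": 0.1},  # clicky, low civic value
    {"id": "election",  "importance": 0.9},  # less clicky, high civic value
]
# A click-optimizing recommender alone would rank the celebrity story first:
user_relevance = {"celebrity": 0.8, "election": 0.5}
feed = rank_feed(stories, user_relevance)
# With the editorial term mixed in, the election story wins:
# celebrity = 0.6*0.8 + 0.4*0.1 = 0.52; election = 0.6*0.5 + 0.4*0.9 = 0.66
```

Tuning `editorial_weight` is the quantitative version of the qualitative claim: AI determines relevance to the user, the editor determines importance to society.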

Pillar 3: Generative AI – Augmentation, Not Replacement

Regarding GenAI (ChatGPT, etc.), the argument shifts to “Human in the Loop.” The speaker details the progression from GPT-2 (useless) to GPT-3 (experimental) to current models (production). The key insight is the deployment of “Magnet” (or the internal platform) which serves as a wrapper around these models.

The argument is that GenAI in news is currently best suited for “transformative” tasks—summarizing, headline writing, spell-checking, and formatting—rather than “generative” tasks like writing original news from scratch. The “Hallucination” problem is cited as the primary barrier. To solve this, the group enforces a strict architectural pattern: RAG (Retrieval Augmented Generation). They do not ask the AI to “write a story about the budget.” They feed the AI the specific reporter’s draft and ask it to “summarize this specific text into a headline.” By grounding the AI in proprietary, verified content (the “Grounding” strategy), they minimize the risk of fabrication.
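The grounding pattern reduces to a simple discipline at the prompt level. This is a minimal sketch of that discipline, with illustrative wording rather than the group’s actual prompts: the model is handed verified source text and asked to transform it, never to write from its training data.

```python
# Minimal sketch of RAG-style grounding: the verified reporter draft
# travels inside the prompt, so the model transforms known text rather
# than generating facts. Prompt wording is illustrative.

def grounded_headline_prompt(reporter_draft: str) -> str:
    return (
        "Using ONLY the article below, suggest a headline. "
        "Do not add facts that are not in the text.\n\n"
        f"ARTICLE:\n{reporter_draft}"
    )


draft = "The city council approved the 2025 budget by a vote of 7 to 2."
prompt = grounded_headline_prompt(draft)
# Contrast with the ungrounded request "write a story about the budget",
# where every fact would come from the model's statistical memory.
```

The architectural point is that hallucination risk is managed in the retrieval layer, before the model is ever called.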

Furthermore, the argument distinguishes between “efficiency” and “creativity.” Currently, the tools are sold on efficiency (saving time). But the speaker hints at a future “Research” phase where AI acts as a highly capable intern, parsing thousands of PDF pages to find contradictions or trends. This moves GenAI from a “writing assistant” to an “investigative tool,” which is a higher-value proposition.


The Future is “Self-Service” and Navigating the “Trough of Disillusionment”

The final core premise concerns the sustainability of AI innovation. The current model of a central team building tools for journalists is a bottleneck. The future requires democratizing AI creation—allowing journalists to build their own agents—while navigating the legal, ethical, and transparent definitions of “AI-generated content.”

Elaboration

Breaking the Bottleneck via Democratization

The speaker identifies a critical flaw in their current success: they are becoming the bottleneck. As the newsrooms realize the power of AI, the demand for custom tools (e.g., “I need a bot that summarizes municipal meeting minutes” or “I need a tool that converts sports scores to text”) outstrips the capacity of the 17-person central unit.

The argument focuses on a shift toward “Self-Service.” This involves moving from building products to building platforms. The AI unit intends to provide a toolkit—likely a no-code or low-code interface on top of their “Magnet” infrastructure—that allows advanced journalists (super-users) to prompt-engineer their own solutions. This mirrors the “Citizen Developer” trend in broader IT. If a journalist can build their own “research agent,” the innovation speed increases exponentially. This requires a culture shift where journalists are not just consumers of software but creators of it. It validates the earlier “Genomic” argument: if AI is truly genomic, every cell (journalist) should be able to express it.
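What “self-service on top of Magnet” might look like can be sketched as configuration-driven tooling. The schema below is entirely hypothetical, not the actual toolkit: the journalist describes a tool as plain data, and the platform turns it into a reusable, guard-railed prompt.

```python
# Hypothetical sketch of a self-service layer: a journalist defines a
# tool as configuration; the central platform assembles the prompt.
# Schema and field names are illustrative, not the Magnet toolkit.

journalist_tool = {
    "name": "minutes-summarizer",
    "task": "Summarize these municipal meeting minutes in 5 bullet points",
    "constraints": ["quote figures exactly", "flag items involving budgets"],
}


def build_tool_prompt(tool: dict, source_text: str) -> str:
    """Turn a journalist's config into a grounded prompt."""
    rules = "\n".join(f"- {c}" for c in tool["constraints"])
    return f"{tool['task']}.\nRules:\n{rules}\n\nSOURCE:\n{source_text}"


prompt = build_tool_prompt(
    journalist_tool,
    "Item 4: road repairs, 2.1m DKK approved by committee.",
)
```

The point of the design is that the newsroom ships the dict, not the code: the 17-person unit maintains `build_tool_prompt` once, while every desk can define its own tools.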

Transparency and Ethics as a Product Feature

The presentation touches on the “Transparency Paradox.” When do you tell the reader AI was used? The argument offered is pragmatic. If AI is used for spell-checking or basic grammar (low-level transformative tasks), no label is needed—just as we don’t label stories “Checked by Microsoft Word.” However, if AI plays a “massive role” in the structure or content generation, or if there is “no human in the loop” (which they currently avoid), labeling is mandatory.

This is not just an ethical stance but a brand survival strategy. In a world flooded with synthetic, low-quality AI slop, the “Human Verified” stamp becomes a premium asset. The argument implies that legacy media’s value proposition shifts from “we have information” to “we have verified information.” Therefore, their AI strategy is designed to enforce human oversight. They are building tools that force the journalist to review the output (e.g., the headline suggestion tool requires a selection, it doesn’t auto-publish).

The Next Wave: Agents and Audio

Finally, the argument looks at the technological horizon. The speaker notes that while “text generation” is maturing, “Agents” (AI acting on information, not just summarizing it) are the next frontier. This aligns with the prediction that AI will move from “Read” to “Act.” An agent won’t just summarize a press release; it will email the press secretary for a comment, check the archives for contradictions, and draft the article.

Additionally, “Audio” is highlighted as an under-exploited vector. The ability to transcribe, translate, and voice-clone allows for massive content recycling (e.g., turning a Danish text article into a German audio briefing). This speaks to the economic necessity of the group: expanding into new markets (Germany, UK) without hiring armies of new reporters, but by leveraging AI to translate and transmute existing IP across borders.

Conclusion on the “Questions We Struggle With”

The argument concludes with humility. The speaker admits they haven’t solved the “Transfer” problem, the “Strategic Alignment” problem, or the “Pacing” problem. This open-ended conclusion serves as an argument in itself: AI in media is not a solved science; it is a series of ongoing experiments where the only wrong move is standing still. The transition from “Innovation Lab” to “Core Business” is messy, linear, and fraught with cultural resistance, but it is the only path forward for a 150-year-old institution surviving in the 21st century.
