Study Tour – Troels Behrendt Jørgensen – Politiken

The Strategic Necessity of Custom-Built, Democratized Data for Cultural Change

The first and perhaps most foundational argument presented by the speaker is that for a news organization to truly become data-driven, the tools must be custom-built to fit the specific culture of the brand, and the data must be democratized across the entire organization to fuel curiosity rather than enforce management targets. This argument rejects the reliance on generic, third-party “black box” analytics tools in favor of a transparent, in-house ecosystem that aligns with the newspaper’s identity as a team-oriented entity.

The Rationale for In-House Development

The speaker opens by acknowledging a counter-intuitive reality: the tools they have built are “not professional” in a commercial sense. They are rough around the edges, the code is “homemade,” and it is not something they could package and sell to other publishers. However, this lack of commercial polish is framed not as a weakness, but as a strategic strength. By building the system in-house—a project involving two developers working on and off for six or seven years—Politiken ensured that the metrics measure what matters to them, not what matters to Google or Facebook.

The speaker explicitly notes that while third-party tools might be “way better” technically, the in-house tool is “very, very close to what we think Politiken is and should be as a brand.” This is a crucial argument for sovereignty over metrics. When a newsroom uses a standard dashboard like Google Analytics, they are subtly incentivized to chase the metrics that specific platform prioritizes (usually raw clicks or unique users). By coding their own solution (using their own data collectors and AWS Cloud infrastructure), Politiken tailored the feedback loop to support their specific journalistic mission. The tool isn’t just a counter; it is a cultural enforcement mechanism that reminds the newsroom of their specific definition of success.

Total Transparency as a Catalyst for Trust

A major pillar of this argument is the concept of the “open system.” Troels emphasizes that “there are no secrets in it.” This radical transparency extends to sensitive business data that is traditionally siloed in the commercial or advertising departments, such as subscription sales figures and total revenue drivers. In many traditional media organizations, there is a “church and state” separation where journalists are shielded from the financial realities of their stories to prevent commercial bias. Politiken argues the opposite: that accessibility to this data is vital.

By making data on sales and subscriptions available to everyone, from the editor-in-chief to the newest intern, the organization removes the mystique and fear often associated with analytics. The data is visible directly on the website via an overlay, not hidden in a separate viewer or login portal that requires a distinct effort to access. This proximity is key; the data lives where the work lives. The speaker notes that they expect staff not to leak this data outside the building, but inside the building, the walls are down. This fosters a sense of collective ownership over the business model. When a journalist sees that a specific story drove subscriptions, they understand their direct contribution to the newspaper’s survival, bridging the gap between editorial quality and business viability.

Curiosity Over Coercion

Perhaps the most nuanced part of this argument is the psychological approach to data adoption. The speaker explicitly rejects the “top-down” management style often associated with data transformation. The goal is not to have a manager “standing on a chair telling everyone you need to have better figures.” Instead, the argument is that the “fuel for this is curiosity.”

The system is designed to answer the natural questions a writer has: “How did my story fare?” and “What could I have done to make it better?” By framing the data as a tool for personal and professional growth rather than a performance review metric, the organization reduces resistance. The speaker notes that if the drive to improve comes from the journalists themselves, it changes the organization “faster” than mandates from leadership. This is a behavioral argument: intrinsic motivation (curiosity) yields better results than extrinsic motivation (managerial pressure).

Team Effort vs. Individual Stardom

Finally, this argument addresses the specific cultural values of Politiken. The speaker makes a point of stating that their culture promotes “team effort over individual effort.” Consequently, the data system is deliberately designed not to produce “leaderboards” or lists of the “best journalists.”

In many data-driven sales or media environments, gamification involves ranking employees to spur competition. Politiken rejects this. “You cannot be on top of this week’s best journalist list—that’s just not who we are,” the speaker says. The data focuses on the content (the article, the podcast, the e-paper), not the creator. This prevents the toxic environment where colleagues compete for homepage placement or social media promotion to boost their personal stats. The focus remains on the collective output of the newspaper. The dashboard shows how the International Desk performed or how the Front Page performed, fostering a sense of communal responsibility.

Technological Independence and Privacy

Underpinning this cultural argument is a technical and ethical one regarding GDPR and user privacy. The speaker notes that they built their own data collector residing on the website because they need to own the technology. This allows them to process and transform data via automated pipelines into an “Insights Suite.” By relying on their own technology, they navigate the complex landscape of user consent (locked-in vs. anonymous users) without ceding control to third-party tech giants. This reinforces the “custom-built” theme: they control the data pipeline from collection to presentation, ensuring it serves their specific ethical and operational standards.

In summary, the first core argument is that the most effective data strategy is one that is homemade, completely transparent, and culturally aligned to prioritize curiosity and teamwork over competition and top-down control.


The Superiority of Composite, Weighted Metrics (“Engagement Score”) Over Vanity Metrics

The second core argument revolves around how success is measured. The speaker argues for the rejection of singular, “vanity” metrics (like page views) in favor of a complex, home-grown, composite algorithm—the “Engagement Score.” This argument suggests that the true value of journalism cannot be captured by a single number, but requires a multi-faceted view that weights deep engagement and financial conversion higher than mere traffic.

The Flaw of Single-Metric Tracking

The speaker critiques the traditional method of looking at “one KPI at a time.” In many newsrooms, a story is deemed successful simply if it gets a lot of clicks. However, the speaker demonstrates that Politiken has moved beyond this. They explicitly state that in their homemade algorithm, “page views ranks relatively low.” This is a significant philosophical stance. It implies that a user simply landing on a page is of low value if they do not stay, read, or convert.

By de-prioritizing page views, the organization disincentivizes “clickbait”—headlines that promise much but deliver little. If the system prioritized page views, journalists would be encouraged to write sensationalist headlines. By ranking page views low, the system encourages substance.

The Composition of the Engagement Score (The Spider Web)

The core of this argument is the “Engagement Score,” a number between 0 and 6. The speaker describes this as a “homemade algorithm” that compresses user behavior into a single digestible figure. This score is visualized as a “spider web” or radar chart containing seven different KPIs.

The speaker details the hierarchy of value within this algorithm:

  1. Sales: “Rank relatively high.”
  2. Reading Time: “Ranks really high.”
  3. Degree of Immersion: “Relatively high.”
  4. Page Views: “Ranks relatively low.”

This weighting system is the operational brain of the newspaper. It tells the journalist that selling a subscription or keeping a reader on the page for five minutes is significantly more valuable than getting ten people to click and bounce in ten seconds. The “Degree of Immersion” is particularly interesting; the speaker explains that this is a calculation of how long a “standard person” takes to read the story compared to the actual behavior (how far down they scrolled, how long they stayed). If a reader matches the expected reading time and scroll depth, immersion is high.
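The weighting logic described above can be sketched in a few lines of Python. The exact weights, KPI names, and normalization below are illustrative assumptions; only the relative ranking of the KPIs comes from the talk, as Politiken's actual algorithm is not public:

```python
# Illustrative sketch of a composite engagement score on a 0-6 scale.
# Weights are hypothetical; only their relative order follows the talk.
WEIGHTS = {
    "reading_time": 0.35,  # "ranks really high"
    "sales": 0.30,         # "rank relatively high"
    "immersion": 0.25,     # "relatively high"
    "page_views": 0.10,    # "ranks relatively low"
}

def immersion(actual_seconds: float, expected_seconds: float,
              scroll_depth: float) -> float:
    """Degree of immersion: actual reading behavior compared with the
    expected reading time of a 'standard person', times scroll depth."""
    time_ratio = min(actual_seconds / expected_seconds, 1.0)
    return time_ratio * scroll_depth  # both factors in [0, 1]

def engagement_score(kpis: dict) -> float:
    """kpis maps each KPI name to a value normalized into [0, 1];
    the weighted sum is stretched onto the 0-6 scale."""
    weighted = sum(WEIGHTS[k] * kpis[k] for k in WEIGHTS)
    return round(6 * weighted, 1)

# A long-read that converted well but drew modest raw traffic:
print(engagement_score(
    {"reading_time": 0.9, "sales": 0.8, "immersion": 0.7, "page_views": 0.2}
))  # → 4.5
```

Because reading time and sales carry most of the weight, the same article scored on raw page views alone would land far lower, which is exactly the incentive shift the speaker describes.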

Visualizing Success for the Newsroom

To make this complex algorithm usable, Politiken created a visual language: the colored bar. The speaker shows that articles have a bar that ranges from white to green. “If it’s very green, that means it has six points on that engagement score.” This simplifies the data for the non-technical journalist. They don’t need to know the math behind the algorithm; they just need to know that “Green is Good.”
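The "Green is Good" bar can be modeled as a linear interpolation from white to green. The exact palette below is an assumed choice; only the white-to-green range and the 0-6 scale come from the talk:

```python
def score_to_color(score: float, max_score: float = 6.0) -> str:
    """Map an engagement score to a hex color on a white-to-green bar.
    The target green (0, 160, 0) is a hypothetical palette choice."""
    t = max(0.0, min(score / max_score, 1.0))  # clamp into [0, 1]
    r = round(255 * (1 - t))                   # fade red channel out
    g = round(255 + (160 - 255) * t)           # slide toward green
    b = round(255 * (1 - t))                   # fade blue channel out
    return f"#{r:02x}{g:02x}{b:02x}"

print(score_to_color(6.0))  # fully green: "#00a000"
print(score_to_color(0.0))  # white: "#ffffff"
```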

However, the system allows for a “deep dive” via the spider web visualization. The speaker notes that “one story might do really well on reading time, another story might do really well on page impressions.” The spider web allows the journalist to see the shape of their success. If the web stretches out towards the “Sales” axis, it was a commercial success. If it stretches toward “Reading Time,” it was an editorial engagement success. This nuance allows for different types of journalism to be validated. A short, breaking news update might not have high reading time, but it might have high traffic (page views). A long-form feature might have lower traffic but massive reading time. The spider web validates both, provided they “gain ground” in their respective corners.

Connecting Content to Commerce

A crucial element of this argument is the direct link between content and the business model. The system tracks “stars”—which represent actual sales. The speaker points out a specific article regarding menopause that generated “45 sales,” which is considered a “really huge amount.”

This level of granularity—knowing exactly which article triggered a credit card transaction—is powerful. It shifts the definition of a “good story” from one that is popular to one that is profitable and sustains the business. The speaker notes that “sales rank relatively high” in the algorithm, reinforcing that the ultimate goal of the content is to support the subscription model. This creates a feedback loop where journalists learn what kind of journalism people are willing to pay for, which is often distinct from what people are willing to click on for free.

The “Black Line” and Internal vs. External Views

The speaker introduces the concept of the “black line” on the engagement bar, which is visible only to the internal organization, not the users. This distinction is vital. While the organization is transparent internally, they do not want to signal to the public which stories are “winning,” as that could influence reader behavior (social proof).

The internal view also includes conversion data: “the four people who actually took out a subscription based on this article alone.” By highlighting these specific conversion numbers alongside the engagement score (0-6), the organization reinforces the composite nature of success. It’s not just about being popular; it’s about being sticky (reading time) and convincing (sales).

Calibration of the Metric

The speaker mentions a fascinating detail about the calibration of their metric: over a four-month period involving 5,000 articles, the average engagement was exactly 3.0. The speaker notes, “If it’s right in the middle, you probably calibrated it right.” This suggests a sophisticated understanding of data science. If the average was 5.0, the metric would be too easy; if it was 1.0, it would be too demoralizing. A 3.0 average means the scale effectively differentiates between average, poor, and exceptional content.
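The calibration check the speaker describes amounts to verifying that the corpus mean sits near the midpoint of the scale. A minimal sketch (the 0.25 tolerance is an assumed threshold, not a figure from the talk):

```python
import statistics

def check_calibration(scores: list, target: float = 3.0,
                      tolerance: float = 0.25) -> tuple:
    """Return the mean engagement score and whether it sits close
    enough to the middle of the 0-6 scale to call the weighting fair."""
    mean = statistics.mean(scores)
    return mean, abs(mean - target) <= tolerance
```

Run over roughly 5,000 articles, a mean near 3.0 indicates the scale differentiates poor, average, and exceptional content rather than bunching everything at one end.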

In conclusion, the second argument is that success in modern digital journalism is too complex for single metrics. It requires a custom, weighted algorithm that prioritizes retention (reading time) and revenue (sales) over raw reach, visualized in a way that allows journalists to see the “shape” of their article’s performance.


Operationalizing Data via Real-Time Feedback, AI Integration, and Strategic Gap Analysis

The third core argument focuses on the application of data. It is not enough to just measure; the data must drive action. This argument encompasses real-time optimization of live stories, the use of AI for classification versus human input for “User Needs,” and the use of data to identify strategic gaps in content production (e.g., the “health” vs. “politics” balance).

Real-Time Optimization: The “Reading Time and Depth” Tool

The speaker demonstrates a tool that allows for immediate, tactical intervention: the “Reading Time and Depth” overlay. This feature allows a journalist to look at a story while it is live and see exactly where readers are dropping off.

The tool uses a visual overlay on the text itself. “If I scroll down… the figures change and the color changes.” The speaker explains that the bar might turn yellow or red as you scroll down, indicating that “you only have 46% of your readers left.” This is actionable intelligence. It transforms data from a post-mortem report (which comes too late to change the outcome) into a live diagnostic tool.

The speaker explains the utility: "If you have really big photos in the middle of your text you might lose part of your audience. If you have a very long quote… maybe you should take it out." This empowers the newsroom to edit a story after publication to salvage its engagement. It shifts the workflow from "publish and forget" to "publish, monitor, and optimize." The speaker notes that if a story is underperforming, editors might test a new headline or, in severe cases, pull the story, re-evaluate its structure, and republish. This creates a dynamic relationship with the content.
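The drop-off figures behind the overlay reduce to a simple survival count: for each paragraph, how many sessions scrolled at least that far. A minimal sketch, under the assumption that the collector records each reader's deepest paragraph reached (the data shape here is hypothetical):

```python
def retention_by_paragraph(max_depth_reached: list, n_paragraphs: int) -> list:
    """max_depth_reached holds, per session, the 0-based index of the
    deepest paragraph that reader saw. Returns the percentage of
    readers still present at each paragraph."""
    total = len(max_depth_reached)
    return [
        round(100 * sum(1 for d in max_depth_reached if d >= i) / total)
        for i in range(n_paragraphs)
    ]

# Four sessions: one bounced at the top, one left mid-article,
# two finished the five-paragraph story.
print(retention_by_paragraph([0, 2, 4, 4], 5))  # → [100, 75, 75, 50, 50]
```

A sharp step down between two adjacent paragraphs is where the overlay would turn yellow or red, pointing at the oversized photo or overlong quote the speaker mentions.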

AI vs. Human Intelligence: The “User Needs” Strategy

A significant portion of this argument is devoted to the interaction between artificial intelligence and human editorial judgment. The speaker explains that Politiken uses AI (specifically a tool called Magna) to automate the tedious task of topic classification (IPTC codes). This allows them to visualize their output (e.g., seeing that "politics" took up a massive amount of content).

However, the speaker makes a critical distinction regarding “User Needs”—tags like Fascinate Me, Update Me, Guide Me, Give Me Perspective. The speaker states, “We’ve decided not to use AI [for User Needs]… If you want to use user needs on a practical level, you need to use it for ideation… It should be on their backbone.”

This is a profound argument for cognitive training. If AI tags the stories, the metadata exists, but the journalist learns nothing. By forcing the journalists to tag the stories themselves (even though 15% fail to do so), the organization forces the writer to ask, “What is the purpose of this story?” before they write it. It internalizes the strategy. The speaker argues that this manual friction is necessary for cultural adoption. If the journalist doesn’t understand why a story is an “Update Me” versus a “Fascinate Me,” they cannot write better versions of those stories in the future.
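The four User Needs tags behave like a small closed vocabulary that a CMS can require at publish time while deliberately leaving the choice to the journalist. The sketch below is purely hypothetical; the enum and the `validate_submission` helper are inventions for illustration, not Politiken's actual system:

```python
from enum import Enum

class UserNeed(Enum):
    # The four User Needs tags named in the talk.
    UPDATE_ME = "Update Me"
    FASCINATE_ME = "Fascinate Me"
    GUIDE_ME = "Guide Me"
    GIVE_ME_PERSPECTIVE = "Give Me Perspective"

def validate_submission(metadata: dict) -> UserNeed:
    """Refuse to publish untagged stories: the journalist, not an AI,
    must decide which user need the story serves."""
    if "user_need" not in metadata:
        raise ValueError("Tag the user need yourself before publishing")
    return UserNeed(metadata["user_need"])
```

The 15% of stories that still go untagged suggest enforcement of this kind is soft in practice; the friction is the point, not the gatekeeping.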

Strategic Gap Analysis: Production vs. Performance

The final piece of the operational argument is using data to audit the editorial strategy. The speaker uses the dashboard to show a discrepancy between what the newsroom produces and what the audience wants (or pays for).

The speaker highlights a clear example: "We produced too much politics." In the sales view, the politics section appeared as a "red island" (underperforming in sales despite high volume). Conversely, "Health content converts really well for us." They found a category called "Medical Research" with only 8 articles, but an average of 8.6 sales per article, a massive success rate.

This is data-driven editorial strategy in its purest form. It acts as a “subtle hint” to the editors: write more health stories, write fewer generic political updates. The speaker also highlights a gap in “User Needs,” noting that they produced a massive amount of “Update Me” stories, but “Guides” were far more engaging. The data empowers the International Editor to say to their team: “We are doing a lot of update me, maybe we should do more guides.”
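The gap analysis itself is a straightforward aggregation of production volume against conversion per category. A minimal sketch with invented sample numbers (the real figures, such as Medical Research's 8 articles at 8.6 average sales, come only from the talk):

```python
def sales_per_article(articles: list) -> dict:
    """articles: (category, sales) pairs, one per published story.
    Returns {category: (article_count, average_sales_per_article)},
    exposing categories with few articles but strong conversion."""
    stats = {}
    for category, sales in articles:
        count, total = stats.get(category, (0, 0))
        stats[category] = (count + 1, total + sales)
    return {
        cat: (count, round(total / count, 1))
        for cat, (count, total) in stats.items()
    }

# Invented sample: high-volume politics vs. low-volume health coverage.
print(sales_per_article(
    [("politics", 0), ("politics", 1), ("health", 9), ("health", 8)]
))
```

Sorting such a table by average sales rather than article count is what surfaces the "subtle hint" the speaker describes: small, high-converting categories that volume-based dashboards would bury.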

Product-Level Analysis

This operational argument extends to the product containers themselves. The speaker discusses the specific monitoring of the E-paper versus the Website versus the App. They discovered that the E-paper (the digital PDF of the print edition) has a staggering “average reading time per user… of 25 minutes.” This insight—that people treat the PDF with the same reverence as the physical paper—prevents the organization from neglecting legacy formats. It ensures they don’t “forget their e-paper” in the rush to be digital-first. Conversely, they use data to identify struggles, such as the low adoption of the native news app despite its superior user experience (“It’s more inviolable… smoother”).

In conclusion, the third argument dictates that data is useless unless it changes behavior. Politiken achieves this by giving journalists live tools to save dying stories, forcing them to cognitively engage with “User Needs” rather than outsourcing it to AI, and using macro-data to realign their content production (less politics, more health/guides) with audience demand.
