X to Introduce AI-Driven Feed With User Prompts
Elon Musk says X is moving to a “purely AI” recommendation algorithm that lets people steer their feeds with natural-language prompts—a shift that could reshape how social platforms personalize content. If X delivers on this plan, others may follow, bringing direct, prompt-based control into mainstream social media while raising thorny questions about transparency, bias, and who ultimately controls what users see.
What X Is Planning
Musk said the “purely AI” system could roll out as soon as November or December, with users able to change their feeds on the fly by asking Grok, X’s in-house chatbot. He also pledged to open-source the algorithm “every two weeks or so,” implying frequent code drops that could, in theory, allow outside scrutiny of how the feed works.
The comments came in response to X product head Nikita Bier, who framed the goal as widening personalization beyond one-size-fits-all recommendations:
“The goal for your X timeline is to get out of the mainstream algo and the political crusades and find your niche.”
How It Works Today—and What Could Change
Right now, X users can shape their timeline by unfollowing accounts, muting or blocking, filtering notifications, and tapping “show less often.” The company also recently added post boosting, letting users pay $50 to $1,000 to elevate individual posts.
The new approach would add a prompt layer on top of this—users could tell Grok what topics, tones, or sources they want to see more or less of, and the feed would supposedly adjust in real time. The crucial question: will those prompts genuinely drive the algorithm, or will the system still tilt toward outcomes that serve platform-level goals such as engagement and ad performance?
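To make that concrete, here is a minimal sketch of how a prompt layer could sit on top of an already-scored feed. X has not published how Grok's instructions would be applied, so everything below is an assumption: prompt text gets distilled into per-topic weight multipliers that rescale each post's ranking score before the timeline is re-sorted.

```scala
// Hypothetical sketch of a prompt layer; none of these names come from X's code.
case class RankedPost(id: Long, topics: Set[String], score: Double)

object PromptLayer {
  // Assume an upstream model (e.g. Grok) has already turned a prompt like
  // "more science, less politics" into per-topic multipliers such as these.
  val promptWeights: Map[String, Double] =
    Map("science" -> 1.5, "politics" -> 0.3)

  // Rescale each post's engagement score by the product of the weights for
  // every topic the prompt mentions (1.0 if none match), then re-rank.
  def applyPrompt(feed: Seq[RankedPost]): Seq[RankedPost] =
    feed.map { p =>
      val w = p.topics.flatMap(promptWeights.get).product // empty product = 1.0
      p.copy(score = p.score * w)
    }.sortBy(-_.score)
}
```

Even in this toy version, the tension is visible: the prompt only rescales scores the engagement model already produced, so the underlying optimization target never changes.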
Inside X’s Recommendation System
When X open-sourced portions of its code, it outlined a multi-stage pipeline. A service called Home Mixer, built on a custom Scala framework named Product Mixer, assembles the For You timeline by pulling in candidate posts, scoring them with machine-learned models and heuristics, and filtering the results. Under the hood, X maintains a real-time interaction graph connecting users and content to surface relevant candidates.
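As a rough illustration of that candidate-score-filter shape, a mixer stage might look like the Scala sketch below. All names and logic are invented for the example; the real Home Mixer and Product Mixer internals are far more involved.

```scala
// Toy mixer pipeline in the spirit of the described stages.
// Candidate, the sources, and the scorer are stand-ins, not X internals.
case class Candidate(id: Long, authorId: Long, features: Map[String, Double])

object HomeMixerSketch {
  // Stage 1: candidate sourcing, e.g. posts from followed accounts plus
  // out-of-network suggestions surfaced by the interaction graph.
  def source(inNetwork: Seq[Candidate], outOfNetwork: Seq[Candidate]): Seq[Candidate] =
    inNetwork ++ outOfNetwork

  // Stage 2: scoring. A stand-in for the ML ranker: a weighted sum of features.
  def score(c: Candidate): Double =
    c.features.getOrElse("pLike", 0.0) * 0.5 + c.features.getOrElse("pReply", 0.0)

  // Stage 3: filtering and ordering, e.g. dropping blocked authors and
  // returning the timeline sorted by predicted engagement.
  def assemble(cands: Seq[Candidate], blocked: Set[Long]): Seq[Candidate] =
    cands.filterNot(c => blocked.contains(c.authorId)).sortBy(c => -score(c))
}
```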
For ranking, X says it uses a neural network with roughly 48 million parameters trained on signals like likes, retweets, and replies. The system evaluates thousands of features and assigns around 10 labels per post, each representing the probability of a specific type of engagement. Posts are then ordered by these predicted engagement scores. In short, “parameters” are the values the model learns during training; more parameters generally allow the model to capture more complex patterns.
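The usual way such per-label probabilities become a single ranking score is a weighted sum. The sketch below shows that pattern; the label names echo the signals mentioned above, but the weights are illustrative stand-ins rather than X's published values.

```scala
// Weighted-sum combination of predicted engagement probabilities.
// Label names mirror the signals described above; weights are illustrative.
object EngagementScore {
  val weights: Map[String, Double] = Map(
    "pLike"    -> 0.5,  // probability the viewer likes the post
    "pRetweet" -> 1.0,  // probability of a repost
    "pReply"   -> 13.5  // deeper engagement, weighted more heavily
  )

  // Final score = sum over labels of weight * predicted probability.
  def score(predictions: Map[String, Double]): Double =
    weights.map { case (label, w) => w * predictions.getOrElse(label, 0.0) }.sum
}

// Example: pLike = 0.2 and pReply = 0.01 gives 0.5*0.2 + 13.5*0.01 = 0.235.
```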
Prompting could change which candidates are considered and how they’re ranked. But even with code releases, most users won’t be able to validate whether the model is honoring their prompts or over-weighting engagement proxies that the platform values. Model complexity, constant iteration, and opaque training data make independent verification difficult.
Bias, Transparency, and Regulation
Concerns about algorithmic bias predate Musk’s ownership. In 2021, Twitter’s own research warned that “by consistently ranking certain content higher, these algorithms may amplify some messages while reducing the visibility of others.” The finding underscored a structural risk: even neutral-sounding objectives can produce uneven amplification across topics, communities, or viewpoints.
Regulators have taken notice. In February 2025, French prosecutors opened an investigation into X after a lawmaker alleged that “biased algorithms” distorted the operation of an automated data-processing system. Separately, the European Commission requested internal documents about X’s algorithms as part of an ongoing Digital Services Act (DSA) probe.
Academic scrutiny has also intensified. A preprint by Timothy Graham (Queensland University of Technology) and Mark Andrejevic (Monash University) analyzed engagement metrics and suggested potential algorithmic bias benefiting Elon Musk's personal account and several Republican-leaning accounts, though the evidence was weaker when Musk's account was compared directly with other high-profile users. The study also observed that Republican accounts already tended to receive higher baseline views than Democratic accounts, indicating a preexisting pro-Republican skew even before Musk endorsed Trump. The researchers argue that such dynamics raise broader questions about how algorithm changes shape public discourse and whether platforms act as neutral carriers.
Will Prompts Really Empower Users?
On paper, promptable feeds promise a more transparent and controllable experience. In practice, trade-offs loom large:
- User intent vs. platform incentives: The system may still optimize for engagement, even when users ask for niche or less sensational content.
- Open-source vs. legibility: Frequent code releases are helpful, but the interplay of data, weights, heuristics, and rapid updates can make meaningful auditing hard.
- Filter bubbles vs. personalization: Giving users powerful prompts could deepen echo chambers if not balanced with healthy exposure to diverse viewpoints.
What to Watch Next
If X ships the “purely AI” feed on the stated timeline, key signals to watch include:
- How precisely prompts map to visible changes in the feed.
- The cadence and substance of open-source updates—and whether they enable outside replication or audits.
- Transparency around safety guardrails, political content handling, and paid post boosting within a prompt-driven system.
- Regulatory responses under frameworks like the DSA, particularly around explainability and systemic risk.
X’s experiment could mark the start of a new era where users talk directly to the algorithm. Whether that era delivers genuine control—or just a new interface on old incentives—will depend on how faithfully the system translates prompts into ranking, and how much scrutiny it can withstand.