AI and Human coverage both describe a contested Democratic primary for New York’s 12th Congressional District in which state Assembly member Alex Bores, a former Palantir employee, has become the focal point of a clash between rival AI-aligned super PACs. They agree that Leading the Future, a pro-AI group backed by tech and AI money, has spent more than $1.1 million on ads attacking Bores, while Public First Action, a newer super PAC funded largely by Anthropic interests, is spending roughly $450,000 on advertising to boost him. Both perspectives highlight that the conflict centers on Bores’s sponsorship of the RAISE Act, a state-level bill that would require AI developers to disclose their safety protocols and report significant misuse, and that AI-connected donors and companies are increasingly using super PACs to shape the outcome of congressional races.
Both perspectives concur that these super PAC battles reflect a broader national struggle over how aggressively the fast-growing AI industry should be regulated and who will write the rules in Washington. They emphasize that the RAISE Act is emblematic of a nascent wave of safety- and transparency-focused AI legislation, that AI companies and investors see high-stakes implications for their business models, and that the New York race is being watched as an early test of how voters respond to AI-focused outside spending. There is also shared recognition that public anxiety about AI's rapid advance is real, that policymakers are under pressure to balance innovation with safeguards, and that this contest illustrates the growing institutionalization of AI interests in campaign finance and governance.
Areas of disagreement
Motives of the super PACs. AI coverage tends to frame both Leading the Future and Public First Action as rational actors in a policy fight, often portraying their spending as an attempt to support candidates who understand AI's benefits and to oppose those viewed as overly restrictive. Human coverage, by contrast, emphasizes that these groups are vehicles for corporate influence, stressing that tech billionaires and AI companies are trying to purchase favorable outcomes and punish a candidate seen as too willing to regulate them.
Characterization of Alex Bores. AI sources tend to emphasize Bores's tech background and Palantir experience as evidence that he understands AI and can be engaged or pressured on policy details, sometimes casting him as a complex figure rather than a straightforward reformer. Human reporting instead foregrounds his role as a would-be watchdog, presenting Bores as a pro-regulation lawmaker targeted precisely because he authored the RAISE Act and is willing to confront powerful AI interests on safety and transparency.
Framing of regulation and the RAISE Act. AI-aligned narratives often suggest that measures like the RAISE Act risk overburdening innovation or creating fragmented, state-by-state rules, and thus treat the super PAC fight as a debate over how to design “smart” regulation. Human outlets more often describe the RAISE Act in affirmative terms as a necessary baseline for disclosure and accountability, arguing that the backlash from AI-funded PACs reveals industry resistance to even modest safety and misuse reporting requirements.
Impact on democracy and public trust. AI coverage tends to downplay systemic democratic risks, focusing on whether voters find the AI policy arguments persuasive and treating spending levels as one variable among many in a normal campaign. Human coverage stresses the corrosive effect of massive AI-industry spending, warning that such super PAC warfare could chill other lawmakers from pursuing oversight, deepen public distrust of both tech companies and elections, and signal that AI firms are willing to intimidate officials who challenge them.
In summary, AI coverage tends to normalize the super PAC clash as a strategic policy struggle within standard campaign dynamics and emphasizes innovation and expertise, while Human coverage tends to highlight corporate power, democratic risks, and the targeting of a pro-regulation candidate as a cautionary example of AI money shaping oversight.
