An AI computing center will be built in upstate New York as part of a $400 million plan to bring jobs to the region, spur tech sector innovation and promote AI for the public good, Gov. Kathy Hochul (D) announced Monday. Seven founding entities will lead the Empire AI consortium: Columbia University, Cornell University, New York University, Rensselaer Polytechnic Institute, the State University of New York, the City University of New York and the Simons Foundation. The state will contribute $275 million in funding, and the founding institutions and private partners will contribute the remaining $125 million.
Maryland will establish a government committee to develop a comprehensive plan for AI and agencies' use of the technology, Gov. Wes Moore (D) announced Monday via executive order. The AI subcabinet will establish guardrails for government use of AI, his office said. Moore also announced creation of the Maryland Cybersecurity Task Force, which will bring together officials from the state's Department of Information Technology, Military Department and Department of Emergency Management to coordinate with the Governor's Office of Homeland Security on cross-agency cybersecurity objectives.
FTC and DOJ enforcers should investigate Big Tech’s dominance of AI, a group of more than 20 organizations wrote the agencies Wednesday. “We respectfully urge your offices to investigate Big Tech’s concentration in the AI space and to take appropriate action to enforce our antitrust laws,” they said. Signers included the American Economic Liberties Project, the Center for Digital Democracy, Demand Progress, the Open Markets Institute and Public Citizen. They cited the number of AI startups Meta, Apple, Google, Microsoft and Amazon have purchased in recent years: Apple has acquired 32 AI startups since 2010, 21 of those since 2017, the letter said, while since 2010 Google has purchased 21, Meta 18, Microsoft 17 and Amazon 10. The same companies have also invested substantially in leading AI companies or signaled interest in doing so, they added, including Microsoft’s $13 billion investment in OpenAI. The letter noted Microsoft is joining OpenAI's board as a nonvoting observer.
The FTC and DOJ filed 50 merger enforcement actions in fiscal 2022, the highest total since the 55 actions in 2001, the agencies said Thursday in the annual Hart-Scott-Rodino Report. Merging parties abandoned seven deals in 2022, an abandonment rate nearly 2 percentage points higher than the 5.4% average over the past 10 years, the agencies said. The FTC filed six litigation complaints in 2022, nearly double the average of 3.2 during the previous decade. “These enforcement actions preserved competition in numerous sectors of the economy, including consumer goods and services, pharmaceuticals, healthcare, high tech and industrial goods, and energy,” FTC Chair Lina Khan said in a joint statement with Commissioners Alvaro Bedoya and Rebecca Kelly Slaughter. The telecommunications sector represented 0.7% of the 3,029 deals reported in 2022; ISPs, web search portals and data processing services accounted for 3.6% of the agreements.
The FTC on Wednesday unveiled proposed changes to children’s privacy law rules, including more stringent requirements for obtaining parental consent and limits on how platforms can monetize children’s data. The agency issued an NPRM seeking comment on potential changes to the Children’s Online Privacy Protection Rule. The changes would require platforms and apps to “obtain separate verifiable parental consent to disclose information to third parties including third-party advertisers -- unless the disclosure is integral to the nature of the website or online service.” The agency would ban websites from “collecting more personal information than is reasonably necessary for a child to participate in a game, offering of a prize, or another activity.” In addition, it would prohibit operators from “using online contact information and persistent identifiers collected under COPPA’s multiple contact and support for the internal operations exceptions to send push notifications to children to prompt or encourage them to use their service more.” The agency is considering specifying that personal information can be retained “only for as long as necessary to fulfill the specific purpose for which it was collected.” The commission voted 3-0 to issue the NPRM. The public will have 60 days to comment after the notice's Federal Register publication. “Kids must be able to play and learn online without being endlessly tracked by companies looking to hoard and monetize their personal data,” FTC Chair Lina Khan said in a statement. “The proposed changes to COPPA are much-needed, especially in an era where online tools are essential for navigating daily life -- and where firms are deploying increasingly sophisticated digital tools to surveil children.” In a statement Wednesday, Sens. Ed Markey, D-Mass., and Bill Cassidy, R-La., said the FTC proposal is “critical to modernizing online privacy protections” but shouldn’t be seen as a replacement for legislation. Markey and Cassidy wrote legislation updating children’s privacy law (see 2303220064).
Three porn sites were designated very large online platforms (VLOPs) under the EU Digital Services Act (DSA), the European Commission said Wednesday. The commission concluded that Canadian-owned Pornhub, Cyprus-registered Stripchat and Czechia-registered XVideos meet the DSA threshold of 45 million average monthly users in the EU. As a result, the companies must begin analyzing the systemic risks they pose when their platforms disseminate illegal content and content that threatens fundamental rights; take steps to address those risks; design their services to avoid risks to children's well-being; and comply with transparency and accountability rules. The EC designated 19 VLOPs in April and reported in November that while the platforms were making a good effort to comply with the DSA, much more needed to be done (see 2311100001). The EC opened an investigation Monday into whether X, a VLOP, violated the DSA, the commission's first DSA investigation (see 2312180004). By Feb. 17, all platforms except small and microenterprises must comply with the act.
The European Commission is investigating whether X breached the Digital Services Act, it said Monday. X didn't immediately comment. It's the first time the EC has opened proceedings under the DSA. X is one of 19 companies the DSA classifies as "very large online platforms" (VLOPs), which must analyze the systemic risks they create through the dissemination of illegal content or the harmful effects such content has on fundamental rights (see 2311100001). On the basis of a preliminary investigation, including X's first risk report, transparency report and replies to a formal request for information, the EC said the company may have violated the DSA "in areas linked to risk management, content moderation, dark patterns, advertising transparency and data access for researchers." The formal infringement proceedings will focus on: (1) compliance with DSA obligations related to countering dissemination of illegal content in the EU; (2) the effectiveness of measures taken to combat information manipulation on the platform, particularly X's "so-called 'Community Notes' system" in the EU; (3) the measures X took to increase its platform's transparency; and (4) a suspected deceptive design of the user interface, particularly related to checkmarks linked to certain subscription products (the blue checks). The EC sent an "important signal today" to show that it wants the DSA to change the business models of VLOPs, an EC official said at a briefing. Launching the inquiry doesn't mean X has breached the DSA, only that the EC has significant grounds to investigate, the official said. Illegal content in the EU is a key area of concern, the official said: X's notification system might not comply with the DSA, and some of its risk assessments for the EU aren't sufficiently detailed, especially regarding the languages monitored for illegal content. Some of the company's mitigation techniques are very broadly defined and may not be effective against illegal content such as graphic violence connected to the Israel-Hamas conflict, the official added. In addition, the way X deals with disinformation relies on a combination of different systems, including blue checks, which may mislead users into believing checked content is more trustworthy, she said. The EC has had strong engagement from all the VLOPs, but it's a "glass half full" because it's unclear whether that engagement is enough to mitigate risks, the official said. Asked how the EC defines illegal content, the official said the DSA isn't a content moderation rule but an approach to dealing with systemic risks and assessing what VLOPs do when they're notified of such content on their sites; the same goes for disinformation. The EC received examples of material flagged by national media authorities, including depictions of violent crimes and visible wounds, and forwarded them to X, but the company didn't address them, a second official noted. X's policies forbid publication of such content, but it appears to remain available on the site. The EC will continue gathering evidence and, if it finds noncompliance, could impose interim measures, accept commitments from X to remedy the problems or make an infringement decision.
The FTC will “closely monitor” generative AI for enforcement opportunities to protect competition and consumers, agency staff said in a report issued Monday. Staff offered takeaways from an October roundtable where creative professionals discussed AI's benefits and risks. Their concerns included data collection without consent, undisclosed use of their work, competition from AI-generated content, AI-driven mimicry and false endorsements. “Although many of the concerns raised at the event lay beyond the scope of the Commission’s jurisdiction, targeted enforcement under the FTC’s existing authority in AI-related markets can help protect fair competition and prevent unfair or deceptive acts or practices,” the agency said.
EU privacy law doesn't need tweaking at present, the European Data Protection Board said in response to a European Commission report on how well the general data protection regulation (GDPR) is working. Following its Dec. 15 plenary, the board said it "considers that the application of the GDPR in the first 5 and a half years has been successful. While a number of important challenges lie ahead, the EDPB considers it premature to revise the GDPR at this point in time." It urged the European Parliament and Council to quickly approve procedural rules relating to cross-border enforcement of the measure. Moreover, it said, national data protection authorities and the board need sufficient resources to continue carrying out their duties. The EDPB said it's convinced that existing tools in the GDPR will lead to a "common data protection culture" if they're used in a harmonized way. In a Q&A with Communications Daily, European Data Protection Supervisor Wojciech Wiewiorowski said he expects discussions about changes to the GDPR to begin in 2025 to deal with AI, among other items (see 2312010002).
NTIA will launch a public inquiry into “the risks and benefits of openness of AI models and their components,” the agency said Thursday. Administrator Alan Davidson will announce the launch during an event Wednesday with experts from the Center for Democracy & Technology, GitHub, Princeton University and the Centre for the Governance of AI attending.