Canada should reverse its “discriminatory” 5% tax on U.S. music streaming platforms, the Digital Media Association told the Canadian House International Trade Committee Thursday (see 2406040031). The new tax could result in increased costs for consumers and undermine investments in the Canadian music industry, CEO Graham Davies said in written testimony. DiMA membership includes Amazon Music, Apple Music, Pandora, Spotify and YouTube. He noted U.S. Treasury Secretary Janet Yellen and David Cohen, the U.S. ambassador to Canada, have “expressed strong opposition to any discriminatory taxes against U.S. firms.” Music streaming services distribute about 70% of the money they receive from consumers to music labels, music publishers and collective rights management organizations, while commercial Canadian radio stations likely pay less than 9% to music rightsholders, he said. DiMA wants to “continue supporting, and investing in Canadian music and culture,” Davies said. “We are concerned that implementation of this legislation jeopardizes the ability to meaningfully do so.”
The Cybersecurity and Infrastructure Security Agency should consider allowing state and local governments to voluntarily comply with new rules under a 2022 cyber incident reporting law, the National Association of Secretaries of State told CISA in comments due last week. CISA is finalizing rules for the Cyber Incident Reporting for Critical Infrastructure Act, with requirements for critical infrastructure owners and operators (see 2203160051). The agency posted comments through Thursday. NASS membership includes top state election officials from 40 states and territories, including Alabama, California, Colorado, Florida and New York. Its comments note that state and local election officials share cyber information with CISA on a “well-functioning,” voluntary basis. Industry groups asked CISA in the past for narrow rules and to avoid overly burdensome reporting requirements for companies (see 2211290071). NASS is “concerned” the proposed rules may “disincentivize” state and local officials from participating in their “well-functioning voluntary partnership.” It continued, “CISA should prioritize continuing to maintain this voluntary partnership over imposing requirements on SLTT government entities.” The proposed rules are “overly broad and would strain the resources of SLTT government entities during a critical time for cyber incident response.” The incident reports would require hours of staff time, which is “challenging for state government entities and potentially impossible for many small local jurisdictions,” NASS said.
An age-appropriate social media design bill that Pennsylvania lawmakers are considering is unenforceable because of its vague language about protecting children, House Children and Youth ranking member Barry Jozwiak (R) said Wednesday. The committee planned to vote on the Online Safety Protection Act (HB-1879) but postponed the vote over Jozwiak's technical objections. Introduced by Chair Donna Bullock (D), HB-1879 would require that companies designing platforms a child will “likely" access do so with the “best interest” of children in mind. In addition, it would require age-appropriate design standards similar to provisions included in California’s enjoined social media design law (see 2311160059). Committee staff said Google supports the legislation in Pennsylvania. Google didn’t comment Wednesday. Jozwiak said he has received three pages of questions and concerns from Pennsylvania Attorney General Michelle Henry (D) about the bill’s “overly broad” terms and definitions. The measure is “essentially unenforceable” against entities that don’t gather “actual knowledge” of ages, and the AG lacks the resources to enforce it as written, he said. He formally moved to table the legislation. That motion failed on a 14-11 party-line vote. Committee members had several weeks to file amendments and work with sponsors, Bullock said. Jozwiak argued consideration of the legislation would be out of order because a Bullock amendment was received at 1:22 p.m. Tuesday, and committee rules dictate that the deadline is 1 p.m. Bullock conferred with committee staff and ultimately tabled the bill. Her amendment would alter some language, including terms like “best interests of a child.” The amendment would extend the effective date of the legislation to December 2025.
Banning social media platforms from using algorithms to tailor content could expose children to harmful content, the Computer & Communications Industry Association said Wednesday in a letter opposing a bill the New York State Legislature is considering. Introduced by Sen. Andrew Gounardes (D) and cosponsored by Sen. Patricia Canzoneri-Fitzpatrick (R), the Stop Addictive Feeds Exploitation (SAFE) for Kids Act (S-7694A) would ban platforms from using algorithms that provide “addictive feeds” to minors. California is also looking into regulations for kids’ algorithms (see 2405210019). “These algorithms also protect teens from harmful content and help tailor online content for younger users,” CCIA said. “While the intent is to reduce the potential for excessive scrolling, eliminating algorithms could lead to a random assortment of content being delivered to users, potentially exposing them to inappropriate material.” Northeast Regional Policy Manager Alex Spyropoulos said there are “numerous tools” that prevent excessive screen time and content consumption without compromising protective measures. The New York bill’s age-verification provisions could also “increase the amount of sensitive information users must share with companies and potentially cut off users from the online communities they depend on,” CCIA said.
The Biden administration’s merger guidelines give agencies a license to challenge any transaction without the support of sound antitrust analysis, CTA said in a report released Wednesday (see 2312180069). The 2023 guidelines, issued by the FTC and DOJ, “make incorrect claims about the effects of competition on specific performance variables” and “focus on performance variables that are poor indicators of the effects of mergers on consumers and society,” CTA said. FTC Chair Lina Khan and DOJ Antitrust Division Chief Jonathan Kanter defended the administration’s antitrust enforcement approach during an event Tuesday (see 2406040065). CTA argued the guidelines let the agencies define dominant market positions in “whatever nebulous way” they choose while disregarding decades of economic literature. The guidelines' authors assume terms like “durable market power” and “persistence of market power” are “understood,” CTA said. The association cited examples of “nebulous guidance,” including the guidelines' claim that “the persistence of market power can indicate that entry barriers exist, that further entrenchment may tend to create a monopoly, and that there would be substantial benefits from the emergence of new competitive constraints or disruptions.” The agencies haven’t defined those terms, CTA said.
The Biden administration will complete a cyber pilot program in 2025 to better understand how it should harmonize cyber regulations, save money and improve cyber outcomes, National Cyber Director Harry Coker said Tuesday. In August, the Office of the National Cyber Director issued a request for information about harmonizing regulation across federal agencies (see 2311030046). ONCD on Tuesday issued a summary of public feedback, which included comments from USTelecom, NCTA, CTIA, BSA | The Software Alliance and the U.S. Chamber of Commerce, as well as consumer groups like Consumer Reports and the Electronic Privacy Information Center. Many commenters said cyber compliance costs are forcing organizations to draw resources away from cybersecurity programs, Coker said. A related issue is that international and state regulatory frameworks create inconsistencies and duplication, he said. Coker noted the Chamber of Commerce, the National Electrical Manufacturers Association and CTIA “suggested that Congress consider legislation to set national, high-level standards for cybersecurity.” ONCD expects it will complete a pilot program in 2025 that explores cyber reciprocity. The term refers to the federal government relying on internal and external organizations’ security assessments, which can reduce time, costs and resources when authorizing federal information technology systems. The pilot program will focus on a reciprocity framework “to be used in a critical infrastructure subsector,” said Coker. Commenters believe there’s a lack of regulatory harmonization and reciprocity, which affects the competitiveness of businesses in “all sectors,” Coker noted. The pilot program will give ONCD “valuable insights as to how best to design a cybersecurity regulatory approach from the ground up,” he said.
TikTok last week denied a Reuters report that it's developing an operationally independent U.S. version of the social media application that Chinese parent ByteDance could sell to a non-Chinese owner. TikTok said it continues to maintain that it's “simply not possible” commercially, legally or technologically for ByteDance to divest the popular app, as a recently enacted U.S. law requires. The platform has asked the U.S. Court of Appeals for the District of Columbia Circuit to overturn the law, which will ban the app in the U.S. if it's not sold to an entity that isn’t controlled by a foreign adversary (see 2405070049). A group of TikTok content creators has also sued, claiming the law violates their First Amendment rights (see 2405160065).
Federal law enforcement agencies should be banned from using “racially discriminatory technologies” like facial recognition and predictive policing, consumer advocates wrote DOJ in comments due last week. The Center for American Progress, the Electronic Privacy Information Center, Fight for the Future, Free Press, the Lawyers’ Committee for Civil Rights Under Law and the National Association for the Advancement of Colored People signed joint comments in response to the National Institute of Justice’s query. NIJ is preparing a report on AI use in the criminal justice system, as President Joe Biden’s AI executive order directs. DOJ said it isn’t planning to publish the comments. Research shows facial recognition technology (FRT) and predictive policing tools are “racially discriminatory,” the groups said: Accordingly, authorities should ban these technologies “by default as [they are] presumptively discriminatory and in violation” of the Civil Rights Act. The advocates recommended limited, case-by-case waivers under which police could use the technology when there’s clear evidence it isn’t discriminatory, as well as audits before and after the technology is used. They said algorithmic surveillance tools, including drones and FRT, should be banned in public places or any setting where First Amendment rights could be chilled. Police often use these technologies when targeting racial justice protesters and activists, the advocates said: “Because of the acute threat that these technologies pose to First Amendment rights, their use for public surveillance should be prohibited.” In addition, they recommended law enforcement disclose the use of AI technology to defendants in criminal cases.
YouTube should stop helping the Russian government censor news and human rights organizations on the platform, Access Now and a coalition of Russian advocates wrote in a letter Tuesday. They claimed YouTube has blocked anti-war content from groups like Roskomsvoboda and journalists Ekaterina Kotrikadze and Tikhon Dzyadko. Access Now signed the letter with Roskomsvoboda, OVD-Info, Greenhouse of Social Technologies and Reporters Without Borders. YouTube unblocked at least three censored videos on the platform, but the content remains unavailable in Russia, they said. YouTube said in a statement Tuesday that it regularly reviews takedown requests from governments all over the world. If the company believes a government is trying to silence citizens, then YouTube pushes back, it said. “We defend these cases vigorously, and have previously even been fined for our non-compliance, including by the Russian government,” YouTube said. The platform “has remained available in Russia” throughout the war with Ukraine, and the company regularly removes channels and content from Russian-linked groups spreading false information about the conflict, YouTube said.
Scams on Facebook and Instagram made up nearly 75% of social media-related fraud reported to the FTC in 2023, the agency said Friday. The FTC released data from reports to its Consumer Sentinel Network. Facebook accounted for 51% of fraud originating on social media, and Instagram accounted for 22%. Best Buy’s Geek Squad, Amazon and PayPal were the top three most-impersonated companies, according to the report. Microsoft ranked No. 4, Apple No. 7 and Comcast No. 8. Consumers reported losing $60 million to Microsoft impersonators, more than the $50 million reported lost to Geek Squad, Amazon and PayPal scams combined. Consumers reported losing $15 million to Geek Squad impersonators, $19 million to Amazon scammers and $16 million in PayPal-related scams.