سبوت ميديا – Spot Media



Elon Musk’s Social Media Behavior Puts Twitter’s Research in Peril

Two years ago, Twitter launched what is perhaps the tech industry’s most ambitious attempt at algorithmic transparency. Its researchers wrote papers showing that Twitter’s AI system for cropping images in tweets favored white faces and women, and that posts from the political right in several countries, including the US, UK, and France, received a bigger algorithmic boost than those from the left.

By early October last year, as Elon Musk faced a court deadline to complete his $44 billion acquisition of Twitter, the company’s newest research was almost ready. It showed that a machine-learning program incorrectly demoted some tweets mentioning any of 350 terms related to identity, politics, or sexuality, including “gay,” “Muslim,” and “deaf,” because a system intended to limit views of tweets slurring marginalized groups also impeded posts celebrating those communities. The finding—and a partial fix Twitter developed—could help other social platforms better use AI to police content. But would anyone ever get to read the research?

Musk had months earlier voiced support for algorithmic transparency, saying he wanted to “open-source” Twitter’s content recommendation code. On the other hand, Musk had said he would reinstate popular accounts permanently banned for rule-breaking tweets. He had also mocked some of the same communities that Twitter’s researchers were seeking to protect and complained about an undefined “woke mind virus.” Also disconcerting: Musk’s AI scientists at Tesla have generally not published their research.

Twitter’s AI ethics researchers ultimately decided their prospects under Musk were too murky to wait to get their study into an academic journal, or even to finish writing a company blog post. So less than three weeks before Musk finally assumed ownership on October 27, they rushed the moderation-bias study onto the open-access service arXiv, where scholars post research that has not yet been peer reviewed.

“We were rightfully worried about what this leadership change would entail,” says Rumman Chowdhury, who was then engineering director on Twitter’s Machine Learning Ethics, Transparency, and Accountability group, known as META. “There’s a lot of ideology and misunderstanding about the kind of work ethics teams do as being part of some like, woke liberal agenda, versus actually being scientific work.”
