BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MediaFutures - ECPv6.15.13.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mediafutures.no
X-WR-CALDESC:Events for MediaFutures
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Oslo
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20220327T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20221030T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Oslo:20231130T091500
DTEND;TZID=Europe/Oslo:20231201T170000
DTSTAMP:20260407T113339Z
CREATED:20230530T100919Z
LAST-MODIFIED:20231129T103800Z
UID:15242-1701335700-1701450000@mediafutures.no
SUMMARY:Bergen-Boston Forum
DESCRIPTION:We are happy to invite you to a two-day hybrid workshop about AI and the Future of Protest Politics: Politics and Emotions in the Age of Digital Transformation and Surveillance Capitalism. \nOur goal is to bring together prominent scholars from different disciplines to discuss the impact of AI-based platforms\, and their underlying technological and economic principles\, on political discourse and protest. \nThe event will draw on the Boston-Bergen Forum on Digital Futures—an international research network among the ‘Culture\, Society & Politics’ and the ‘Practical Philosophy’ research groups at UiB’s Philosophy Department\, the Applied Ethics Center at UMass Boston\, and the MIT Program on Human Rights and Technology. \nMediaFutures center director Christoph Trattner is co-leading the project together with Professor Franz Knappik (Bergen)\, Dr. Christopher Senf (Bergen) and Professor Nir Eisikovits (UMass). \n\nWHERE: The Philosophy Department in Bergen\, Sydnesplassen 12/13\, seminar room on the first floor. \nAs this is a hybrid event\, you can join in person or via Zoom. 
\nRegister here\n\n\n1st Workshop Day\, Thursday\, November 30th\n\n09.15 am\nWelcome Note\n\nSection I: Making AI Intelligible\n\n09.30 am\nKeynote by Herman Cappelen (Hong Kong\, online): “AI and The Commodification of Meaning”\n\n10.15 am\nQ&A (Moderation\, Jesse Tomalty)\n\n11.00 am\nRosalie Waelen (Bonn): “The Struggle for Recognition and AI’s Impact on Self-development”\n\n11.30 am\nQ&A (Moderation\, Chris Senf)\n\n12.30 pm\nLunch break at Café Christie\n\nSection II: Contesting the Attention Economy\n\n13.45 pm\nSebastian Watzl (Oslo): “What is Wrong with How Attention is Commodified?”\n\n14.30 pm\nJames Williams: “tba”\n\n15.00 pm\nQ&A (Moderation\, Alec Stubbs)\n\nSection III: Algorithms of (In)justice\n\n15.45 pm\nKjetil Rommetveit (Bergen): “(How) Can you code rights and morality into digital infrastructures and AIs?”\n\n16.15 pm\nAlec Stubbs (UMass Boston): “Generative AI and the Future of Work”\n\n16.45 pm\nQ&A (Moderation\, Carlota Salvador Megias)\n\n17.30 pm\nEnd\n\n\n2nd Workshop Day\, Friday\, December 1st\n\n10.00 am\nWelcome Note\n\nSection I: Political Technologies\n\n10.15 am\nEugenia Stamboliev (Vienna): “Protesting the Classification of Emotions (affects) and its Technological Means”\n\n10.45 am\nJames Hughes (UMass Boston): “Communication Technologies: Hegemonic\, Radicalizing and Democratic”\n\n11.15 am\nQ&A (Moderation\, Alec Stubbs)\n\n12.15 pm\nLunch break at Café Christie\n\nSection II: Future of Protest Movements\n\n13.30 pm\nPaul Raekstad (Amsterdam): “Domination Without Dominators: The Impersonal Causes of Oppression”\n\n14.00 pm\nChristopher Senf (Bergen): “Algorithmic Exploitation of Recognition”\n\n14.30 pm\nQ&A (Moderation\, Ane Engelstad)\n\n15.15 pm\nCoffee 
break\n\nSection III: Activism and Philosophy in the Age of AI\n\n15.30 pm\nMaria Brincker (UMass Boston\, online): “What kind of space is a ‘platform’ with its own goals?”\n\n16.00 pm\nKade Crockford (ACLU Massachusetts & MIT Media Lab\, online): “All Politics is Local: Fighting Face Surveillance from the Ground Up in Massachusetts”\n\n16.30 pm\nQ&A (Moderation\, Chris Senf)\n\n17.30 pm\nEnd\n\n\nAbstracts: \n1) Herman Wright Cappelen (Hong Kong) \n“AI and the Commodification of Meaning” \nAI systems\, owned by private corporations\, will soon have the ability to control the meaning of the sentences we speak and interpret. This can be seen as a form of commodification of speech act content\, a more serious form of commodification than\, e.g.\, artistic commodification. The determination of meaning by AI raises concerns about corporate control over language\, reminiscent of Orwellian scenarios. Often\, the goals behind these communicative exchanges will be foreign to individuals\, who may not endorse or even be aware of them. The result is a form of meaning alienation. \n2) Rosalie Waelen (Bonn) \n“The Struggle for Recognition and AI’s Impact on Self-development” \nCritical theories\, with their focus on power dynamics and emancipation\, offer a valuable basis for the analysis of AI’s social and political impact. Axel Honneth’s theory of recognition is one such critical theory. Honneth’s theory adds to the present AI ethics debate because it shines a light on the different ways in which AI reinforces or exacerbates struggles for recognition. Moreover\, through the lens of Honneth’s theory\, one learns how AI can harm people’s self-development. This presentation highlights some of those contemporary struggles for recognition and their (potential) impact on people’s self-development. 
\n3) Sebastian Watzl (UiO Oslo) \n“What is Wrong with How Attention is Commodified?” \nOur attention is commodified: it is bought and sold in market transactions when individuals lend out the ability to control their attentional capacities in exchange (for example) for technological services. What is wrong with that? Attention markets\, we argue\, resemble labor markets. By drawing on the ethics of commodification and core features of attention\, we show that attention markets\, while not always morally wrong\, carry special moral risks: because of how attention shapes beliefs and desires\, subjective experience and action\, they are prone to be disrespectful and alienating\, and they provide fertile ground for domination. Our analysis calls for regulatory interventions. \n4) James Williams \ntba \n5) Kjetil Rommetveit (UiB Bergen) \n“(How) Can you code rights and morality into digital infrastructures and AIs?” \nIn 1980\, the philosopher of technology Langdon Winner famously asked ‘Do Artifacts Have Politics?’ This question was followed up by Latour’s (1994) and Verbeek’s (2008) analyses of the technological mediation of morality. Whereas these questions were once provocative\, in recent AI regulations they have become part of official governance mechanisms. In this talk I present some novel approaches to governance in the EU\, specifically the risk-based approach and the design-based approach. Situating these within a wider techno-regulatory imaginary\, I provide examples of how these instruments play out in practice. I end on some critical questions: what kind of politics does in-built morality have? And what implications can be discerned for critical publics? \n6) Alec Stubbs (UMass Boston) \n“Generative AI and the Future of Work” \nThis talk intertwines André Gorz’s post-work philosophy with Herbert Marcuse’s critical theory to envision a democratized future in which generative AI serves the productive aims of society. 
The talk evaluates the pitfalls of generative AI in reshaping labor\, including the likelihood of technological unemployment\, downward pressure on wages\, and the deskilling of workers. It also evaluates the potential of generative AI in reshaping labor\, emphasizing the demand for a reduced workweek in leftist politics and labor struggles. Central to the argument is Gorz’s imperative to redefine work’s role in a technologically advanced\, equitable society. \n7) Eugenia Stamboliev (Vienna) \n“Protesting the Classification of Emotions (affects) and its Technological Means” \nTo critique affect technology\, we need to politicize emotionality and affectivity anew. Today\, we are witnessing the emergence of intrusive algorithmic technologies\, such as AI\, in our daily lives. These technologies\, designed to measure and control lives\, information\, and data\, are intended to nudge and influence our political moods and public sentiments as much as to measure expressions and emotions. In this talk\, I will discuss the history of two types of “affect technologies” (ATs) and offer some criticism of their goals and applications. First\, ATs intended to measure and classify emotions and affect emerged from the cognitive turn in computer studies. While popular\, these ATs are normatively problematic and flawed\, yet they still influence the design and economic models underlying many recognition systems. Second\, ATs expected to drive\, manage\, and influence political beliefs and public moods underpin architectures that do more than manage emotions via technological means: they are part of the devaluation of emotions through political campaigning. 
Protesting the shortcomings of ATs means calling into question both the normative and political agendas underlying affect technologies and offering new\, positive approaches to affectivity that lie beyond the scope of measurement and control but remain politically crucial for democratic protest while avoiding commercial and technical exploitation. \n8) James J. Hughes (UMass Boston & IEET) \n“Communication Technologies: Hegemonic\, Radicalizing and Democratic” \nBooks\, radio and television all transformed political mobilization by both elites and radicals. How different are the Internet\, social media\, and algorithmically driven communication? Are we more likely to form radical sub-communities\, each with its own reality (e.g. MAGA)? Can we envision democratic countervailing institutions emerging from the “commodification\, outrageification\, and gamification of protest” by platform companies? Will the algorithmic rules and required moderation included in the EU AI Act\, DMA and DSA reduce ideological hegemony\, improve collaboration and decrease toxicity in these environments? \n9) Paul Raekstad (Amsterdam) \n“Domination Without Dominators: The Impersonal Causes of Oppression” \nSocial movements of the last centuries have been naming and analyzing the complex forms of personal and impersonal domination that they fight to overcome. Yet current theories of domination have largely been unable to make sense of the latter. Theories of domination as being subject to the will\, or arbitrary power\, of another rule them out\, while extant theories of impersonal domination are often unsystematic or narrowly focused. My paper tries to remedy this by developing a systematic theory of impersonal domination\, distinguishing some important types thereof\, and showing why it matters for universal human emancipation. 
\n10) Christopher Senf (UiB Bergen) \n“Algorithmic Exploitation of Recognition” \ntba \n11) Maria Brincker (UMass Boston) \n“What kind of space is a ‘platform’ with its own goals?” \nHow are we to understand our political actions on surveillance- and algorithm-driven for-profit platforms? Current social media platforms present users with possibilities of building vast networks and achieving massive\, fast reach to highly dispersed groups. Hence\, they present incredible opportunities for expanded agency\, organizing\, and information sharing. However\, these platform ecosystems also present users with highly unusual affordance spaces\, which might pose challenges to our agency. Proprietary algorithms\, vast data harvesting and camouflaged behavior modification tools are used to drive platform company interests – often conflicting with those of users. We engage in political movements to shape the future\, but how do our actions on these platforms in fact shape our future and our extra-situational spaces? \n12) Kade Crockford (ACLU & MIT) \n“All Politics is Local: Fighting Face Surveillance from the Ground Up in Massachusetts” \nIn 2019\, the ACLU of Massachusetts launched a campaign to establish democratic control over government use of facial surveillance technology. Over the following two years\, we passed eight bans on government use of face surveillance in cities and towns across the state\, including in Massachusetts’ four largest cities: Boston\, Cambridge\, Springfield\, and Worcester. We also passed a state law creating regulations on police use of the technology statewide. During this talk\, campaign leader Kade Crockford will discuss how the ACLU’s campaigners dreamed big\, built a coalition\, and fought from the ground up to defeat the narrative of technological determinism\, and how you can do it\, too.
URL:https://mediafutures.no/event/bergen-boston-forum-2023-2024/
LOCATION:Philosophy Department of the University of Bergen\, Norway
CATEGORIES:Events
ATTACH;FMTTYPE=image/png:https://mediafutures.no/wp-content/uploads/Frame-1-2.png
END:VEVENT
END:VCALENDAR