Publications

Publications by Seattle University faculty on technology ethics or science and technology studies.

Explore publications written by Seattle University faculty across a diverse mix of disciplines and departments, covering topics from game design to public health to phenomenology.

2024

To Err is Human: Bias Salience Can Help Overcome Resistance to Medical AI
Mathew S. Isaac (Seattle University), Rebecca Jen-Hui Wang, Lucy E. Napper and Jessecae Marsh (Lehigh University)
Computers in Human Behavior
2024

Abstract

Prior research has shown that many individuals exhibit an aversion to algorithms and are resistant to the use of artificial intelligence (AI) in healthcare. In the present research, we show that an intervention that increases the salience of bias in decision making—either in general or specifically with respect to gender or age—makes individuals relatively more receptive to medical AI. This increased receptiveness to AI occurs because bias is perceived to be a fundamentally human shortcoming. As such, when the prospect of bias is made salient, perceptions of AI integrity—defined as the perceived fairness and trustworthiness of an AI agent relative to a human counterpart—are enhanced.

Debating claims of fact in public health: A pedagogical activity
Julie H. Crowe
Qualitative Research in Medicine and Healthcare
April 9, 2024
https://doi.org/10.4081/qrmh.2024.11690

Abstract

This pedagogical activity asks instructors or workshop administrators to guide students through the process of evaluating evidence used to support health misinformation. In learning principles from argumentation and debate, students are asked to develop cases to refute or defend a factual claim about health, construct oral and written arguments for their cases, and share them with other students who will evaluate the strength and quality of evidence used by each side. Ultimately, students will learn how to: i) understand how arguments are constructed that both support and refute a health claim; ii) evaluate evidence used for both sides of a claim of fact; and iii) identify health misinformation, particularly in an online context.

Ethics: Crisis Standards of Care Simulation
Diane Fuller Switzer, DNP, ARNP, RN, CEN, CCRN, FNP/ENP-BC, FNP-BC, FAEN, FAANP
Suzan Griffis Knowles, DNP, MD, RN-BC
Advanced Emergency Nursing Journal
January/March 2024
https://doi.org/10.1097/TME.0000000000000498

Abstract

Ethical dilemmas exist in decision-making regarding resource allocation, such as critical care, ventilators and other critical equipment, and pharmaceuticals, during pandemics. Triage artificial intelligence (AI) algorithms based on prognostication tools exist to guide these decisions; however, implicit bias may affect the decision-making process, leading to deviation from the algorithm recommendations. Conflict within the ethical domain may be affected as well. A knowledge gap was identified within the Adult-Gerontology Acute Care Nurse Practitioner (AG-ACNP) curriculum regarding ethics in crisis standards of care (CSC) medical decision-making. Incorporating a CSC simulation was intended to address this knowledge gap. A simulation-based learning (SBL) experience was designed around a critical access setting in which CSC are in place and three diverse, medically complex patients in need of critical care present to a hospital where one critical care bed remains open. Given the complexity of the simulation scenario, a tabletop pilot test was selected. Three AG-ACNP fourth-quarter students in their critical care rotation volunteered for the pilot test. Students were given the topic, “ethics crisis standards of care,” and the article, “A catalogue of tools and variables from crisis and routine care to support decision-making during pandemics” by M. Cardona et al. (2021), to read in advance. Students were also provided with the triage AI algorithm (M. Cardona et al., 2021), which utilizes prognostication tools to prioritize which patient requires the critical care bed. The expectation was that implicit bias would enter the decision-making process, causing deviation from the triage AI algorithm and moral distress. The debriefing session revealed that students deviated from the triage AI algorithm, experienced implicit bias and moral distress, and utilized clinical judgment and experience to care for all three patients. The pilot test results support the conclusion that a CSC SBL experience addresses a critical knowledge gap in AG-ACNP education, and that an SBL experience incorporating an ethical decision-making curriculum with standardized patients should be developed and trialed as the next step.

Optimizing Play: Why Theorycrafting Breaks Games and How to Fix It
Christopher A. Paul
MIT Press
2024
https://mitpress.mit.edu/9780262547789/optimizing-play/

Description

An unexpected take on how games work, what the stakes are for them, and how game designers can avoid the traps of optimization.

The process of optimization in games seems like a good thing—who wouldn't want to find the most efficient way to play and win? As Christopher Paul argues in Optimizing Play, however, optimization can sometimes risk a tragedy of the commons, where actions that are good for individuals jeopardize the overall state of the game for everyone else. As he explains, players inadvertently limit play as they theorycraft, seeking optimal choices. The process of developing a meta, or the most effective tactic available, structures decision making, causing play to stagnate. A “stale” meta then creates a perception that a game is solved and may lead players to turn away from the game. Drawing on insights from game studies, rhetoric, the history of science, ecology, and game theory literature, Paul explores the problem of optimization in a range of video games, including Overwatch, FIFA/EA Sports FC, NBA 2K, Clash Royale, World of Warcraft, and League of Legends. He also pulls extensively from data analytics in sports, where the problem has progressed further and is even more intractable than it is in video games, given the money sports teams invest to find an edge. Finally, Paul offers concrete and specific suggestions for how games can be developed to avoid the trap set by optimization run amok.

First Contact
Eric R. Severson
Research in Phenomenology
July 8, 2024
https://brill.com/view/journals/rip/54/2/article-p267_7.xml

2023

The promises and challenges of addressing artificial intelligence with human rights
Onur Bakiner
Big Data & Society
2023
https://doi.org/10.1177/20539517231205476

Abstract

This paper examines the potential promises and limitations of the human rights framework in the age of AI. It addresses the question: what, if anything, makes human rights well suited to face the challenges arising from new and emerging technologies like AI? It argues that the historical evolution of human rights as a series of legal norms and concrete practices has made it well placed to address AI-related challenges. The human rights framework should be understood comprehensively as a combination of legal remedies, moral justification, and political analysis that inform one another. Over time, the framework has evolved in ways that accommodate the balancing of contending rights claims, using multiple ex ante and ex post facto mechanisms, involving government and/or business actors, and in situations of diffuse responsibility that may or may not result from malicious intent. However, the widespread adoption of AI technologies pushes the moral, sociological, and political boundaries of the human rights framework in other ways. AI reproduces long-term, structural problems going beyond issue-by-issue regulation, is embedded within economic structures that produce cumulative negative effects, and introduces additional challenges that require a discussion about the relationship between human rights and science & technology. Some of the reasons why AI produces problematic outcomes are deeply rooted in technical intricacies that human rights practitioners should be more willing than before to engage with.

Pluralistic sociotechnical imaginaries in Artificial Intelligence (AI) law: the case of the European Union’s AI Act
Onur Bakiner
Law, Innovation and Technology
2023
https://doi.org/10.1080/17579961.2023.2245675

Abstract

This paper asks how lawmakers and other stakeholders envision the potential benefits and challenges arising from Artificial Intelligence (AI). A close reading of the European Union's draft AI Act, a bill proposed by the European Commission in April 2021, and of 302 response papers submitted by NGOs, businesses and business associations, trade unions, academics, public authorities, and citizens, shows that pluralistic sociotechnical imaginaries contest: (1) the essential characteristics of technology as they relate to society, politics, and law; (2) whether, how and how much law can enable, direct or constrain scientific & technological developments; and (3) the degree to which law does and should intervene into scientific & technological controversies. The feedback from stakeholders reveals major disagreements with the lawmakers in terms of how the relevant characteristics of AI should influence legal regulation, what the desired law should look like, and whether and how the law should intervene into expert debates in AI. What is more, different types of stakeholders diverge considerably in what they problematise and how they do so.

What do academics say about artificial intelligence ethics? An overview of the scholarship
Onur Bakiner
AI and Ethics
2023
https://doi.org/10.1007/s43681-022-00182-4

Abstract

This paper presents an overview of the academic scholarship in artificial intelligence (AI) ethics. The goal is to assess whether the academic scholarship on AI ethics constitutes a coherent field, with shared concepts and meanings, philosophical underpinnings, and citations. The data for this paper consist of the content of 221 peer-reviewed AI ethics articles published in the fields of medicine, law, science and engineering, and business and marketing. The bulk of the analysis consists of quantitative descriptions of the terms mentioned in each article. In addition, each term’s associations are analyzed to understand the specific meaning attached to each term. The analysis of the content is complemented by a social network analysis of cited authors. The findings suggest that some concepts, problem definitions, and suggested solutions in the literature converge, but their content and meaning show considerable variation across disciplines. Thus, there is limited support for the notion that shared concepts and meanings exist in the AI ethics literature. The field appears united in what it excludes: labor exploitation, poverty, global inequality, and gender inequality are not prominently mentioned as problems. The findings also show that the philosophical underpinnings of this academic field should be rethought: only a small number of texts mention any major philosophical tradition or concept. Moreover, the field has very few shared citations. Most of the scholarship has been developed in relative isolation from others conducting similar research. Thus, it may be premature to talk about an AI ethics canon or a coherent field.

AI developers, associations, and the academic community
Mark Chinen
The International Governance of Artificial Intelligence
2023
https://www.elgaronline.com/monochap/book/9781800379220/book-part-9781800379220-11.xml

Abstract

Artificial intelligence developers, the professional associations that represent them, and academic institutions participate in the development and implementation of norms for artificial intelligence governance. Developers of artificial intelligence applications are in a unique position to make technical decisions that have broader impacts. Professional associations in which AI developers, academics and companies participate have recognized the need for the governance of artificial intelligence and have responded by developing ethical guidelines for AI and by participating in the development of other forms of governance at the international level. The university, part of whose mission is to engage in basic research, has also responded by establishing centers for the study of AI governance and is itself responsible for AI norms that have had some influence. Individual academics engage in international collaborations and provide their expertise to international governing bodies. At the same time, these actors are often in close relationships with private firms, often as employees, consultants, or recipients of funding. Finally, the norm of openness and the practice of developing technical tools to detect and mitigate harms might not stem solely from these actors, but they have been championed by them. Openness seems to be well established through cross-border research collaborations, even in the face of national security concerns, and by the adoption of open source practices at the international level.

Trademark Extraterritoriality: Abitron v. Hetronic Doesn’t Go the Distance
Margaret Chon and Christine Haight Farley
Technology & Marketing Law Blog
July 18, 2023
https://blog.ericgoldman.org/archives/2023/07/trademark-extraterritoriality-abitron-v-hetronic-doesnt-go-the-distance-guest-blog-post.htm

A Dialogue on Un/Precedented Pandemic Rhetorics
R. Mitchell, J.H. Crowe, S. DiCaglio, L. DeTora, B. Fitzsimmons, T.B. Hooker, L. Keränen, M. Klein, M. Nicolas, and S. Sastry
Rhetoric of Health & Medicine
2023
https://muse.jhu.edu/pub/227/article/917408

Abstract

Inspired by conversations at the 2021 Rhetoric Society of America Institute workshop on Pandemic Rhetoric(s), this dialogue assembles graduate student, early-, mid-career, and established rhetoric of health and medicine (RHM) and critical health communication scholars to discuss a keyword that has structured political, social, and biomedical thinking about COVID-19: un/precedented. In identifying un/precedented as an organizing temporal rhetoric for the pandemic, we interrogate how recurrent appeals to the pandemic’s novelty both allow for and limit our capacities to meet the pandemic’s tremendous exigencies head-on. Leveraging our unique scholarly and community commitments, we theorize how un/precedentedness 1) becomes complicit in government inaction, 2) (re)asserts conceptual and literal borders, 3) justifies state and national public health mandates, and 4) obscures other historical and contemporary pandemics. We conclude by offering possibilities for interdisciplinary and longitudinal research into the far-reaching effects of contagious disease.

The Celluloid Specimen: Moving Image Research into Animal Life
Benjamin Schultz-Figueroa
University of California Press
2023
https://www.ucpress.edu/book/9780520342347/the-celluloid-specimen

Description

In The Celluloid Specimen, Benjamín Schultz-Figueroa examines rarely seen behaviorist films of animal experiments from the 1930s and 1940s. These laboratory recordings—including Robert Yerkes's work with North American primate colonies, Yale University's rat-based simulations of human society, and B. F. Skinner's promotions for pigeon-guided missiles—have long been considered passive records of scientific research. In Schultz-Figueroa's incisive analysis, however, they are revealed to be rich historical, political, and aesthetic texts that played a crucial role in American scientific and cultural history—and remain foundational to contemporary conceptions of species, race, identity, and society.

2022

Is explainable artificial intelligence intrinsically valuable?
Nathan Colaner
AI & Society
2022
https://doi.org/10.1007/s00146-021-01184-2

Abstract

There is significant divergence when we try to articulate why, exactly, explainable artificial intelligence (XAI) is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question—some version of: ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth having in the first place. As difficult and important as the challenges are in answering these questions, they are distinct from a third question: why do we want XAI at all? There is a vast literature on this question as well, but I wish to explore a different kind of answer. The most obvious way to answer this question is by describing a desirable outcome that would likely be achieved with the right kind of explanation, which would make the explanation valuable instrumentally. That is, XAI is desirable to attain some other value, such as fairness, trust, accountability, or governance. This family of arguments is obviously important, but I argue that explanations are also intrinsically valuable, because unexplainable systems can be dehumanizing. I argue that there are at least three independently valid versions of this kind of argument: an argument from participation, from knowledge, and from actualization. Each of these arguments that XAI is intrinsically valuable is independently compelling, in addition to the more obvious instrumental benefits of XAI.

Cultivating a culture to foster engineering identity
Yen-Lin Han
ASEE Annual Conference & Exposition
2022
https://par.nsf.gov/biblio/10427037

Abstract

The Mechanical Engineering Department at a private, mid-sized university was awarded the National Science Foundation (NSF) Revolutionizing Engineering and Computer Science Departments (RED) grant in July 2017 to support the development of a program that fosters students’ engineering identities in a culture of doing engineering with industry engineers. The Department is cultivating this culture of “engineering with engineers” through a strong connection to industry and through changes in four essential areas: a shared department vision, faculty, curriculum, and supportive policies. This paper reports our continued efforts in these four areas and our measurement of their impact.

Shared department vision: During the first year of the project, the department worked together to revise its mission to reflect the goal of fostering engineering identity. From this shared vision, the department aims to build a culture that promotes inclusive practices. In the past year, during the COVID-19 pandemic, this shared vision continued to guide many acts of care and community building for the department.

Faculty: The pandemic prompted faculty to reflect on how they delivered their courses and cared for students. To promote inclusive practice, faculty utilized recorded lectures, online collaboration tools, and instant messaging apps to provide multiple ways of communicating with students. Although faculty summer immersion had to be postponed due to the pandemic, interactions with industry continued in design courses and via virtual seminars and socials. Efforts were also extended to strengthen connections between the department and recent graduates who had just begun working in industry and could become mentors for current students.

Curriculum: A new curriculum to support the goals of this project was rolled out in the 2019-20 academic year. The pandemic hit right in the middle of its initial implementation. Many adjustments and modifications were therefore made to preserve, in the remote setting, the essence of the new curriculum, which emphasizes hands-on, experiential learning and doing engineering. Although initial evidence indicates the effectiveness of the new courses and curriculum even under remote teaching and learning, there are also many lessons learned that can inform future implementations and modifications of the curriculum.

Supportive policies: The department agreed to celebrate various acts of care for students and for teaching and learning in Annual Performance Reviews. Faculty also worked with other departments, the college, and the university to develop supportive policies beyond the department. For example, based on a recommendation from the department, the college set up a Student Advocate role to assist students in navigating any incident that makes them feel excluded. New university tenure and promotion guidelines have just been approved with support from faculty in the department. Additionally, the department’s effort to build an inclusive culture is aligned with the university initiative to emphasize an anti-racism curriculum.

Details of the action items the department has taken in each area of change to build this inclusive culture and foster engineering identity are shared in this paper. In addition, research gauging the impact of our efforts is discussed.

Students’ Experience of an Integrated Electrical Engineering and Data Acquisition Course in an Undergraduate Mechanical Engineering Curriculum
Yen-Lin Han, Jennifer Turns, Kathleen E. Cook, Gregory S. Mason, Teodora Rutar Shuman
IEEE Transactions on Education
2022
https://doi.org/10.1109/TE.2022.3178666

Abstract

This article presents an innovative course sequence that integrates Electrical Engineering (EE) Fundamentals into the Mechanical Engineering (ME) Instrumentation and Data Acquisition (DAQ) course and reports students’ experience relevant to the sequence’s intended outcomes: helping students learn and connect EE concepts with ME applications and develop their engineering identities.

Background: The ME Department at Seattle University was awarded a National Science Foundation grant to revolutionize its undergraduate program. This project focuses on doing engineering to foster stronger engineering identities. The course sequence is part of the curriculum change for this project and includes open-ended, real-world labs incorporating both EE and DAQ.

Research Questions: 1) Engineering Learning: What evidence is there that students learned EE and DAQ concepts and integrated them with ME? 2) Identity Development: How did the students connect the experience to their evolving identity as engineers? 3) Over-Time Experience: How did students experience the course?

Methodology: A mix of quantitative and qualitative data was used: a quantitative data source (a standardized test) and a qualitative data source (mini-reflections that students provided over the course sequence) were analyzed to address the research questions, connecting the educational design aspects to the intended outcomes.

Findings: The new course sequence created an opportunity to do engineering in a rich way and provided fertile ground for developing engineering identities. Students understood and retained EE and DAQ concepts at a level equal to when the material was taught via separate courses.

Tech Giant Exclusion
John B. Kirkwood
Florida Law Review
2022
https://digitalcommons.law.seattleu.edu/faculty/838/

Abstract

There is no topic in regulatory policy that is more pressing and more controversial than what to do about the tech giants – Google, Facebook, Amazon, and Apple. Critics claim that these powerful platforms crush competitors, distort the political process, and elude antitrust law because it cares only about consumer prices. The only solution, they argue, is to break them up.

This diagnosis is mistaken. The tech giants have indeed engaged in anticompetitive conduct. They have excluded rivals selling products on their platforms by demoting them in search results, copying their products, or refusing to deal with them. While these tactics have harmed consumers, they have never been successfully challenged because they have rarely, if ever, created monopoly power or a dangerous probability of monopoly power, which the Sherman Act requires. This requirement should be eliminated.

The tech giants should not be broken up. Splitting them into smaller versions of themselves would result in higher prices or lower quality. Preventing them from selling their own products on their platforms would deprive consumers of choices they value. Nor should the goals of antitrust law be changed. The fundamental aim of antitrust law is to protect consumers and suppliers like workers from anticompetitive conduct. If courts also had to focus on preserving small business and limiting the political influence of large firms, the goals of antitrust would conflict. Courts would have no objective way of resolving the conflict, the rule of law would suffer, and consumers and workers would be hurt.

Congress should instead amend the Sherman Act to prohibit exclusionary conduct that significantly reduces competition, whether or not it results in actual or probable monopoly power. To avoid chilling procompetitive conduct, the change should apply only to the tech giants and should contain strict proof requirements. This careful expansion would make it much easier to deter tech giant exclusion that harms consumers or workers.

Community and Provider Perspectives on Molecular HIV Surveillance and Cluster Detection and Response for HIV Prevention: Qualitative Findings From King County, Washington
Alic G. Shook, PhD, RN; Susan E. Buskin, PhD, MPH; Matthew Golden, MD, MPH; Julia C. Dombrowski, MD, MPH; Joshua Herbeck, PhD; Richard J. Lechtenberg, MPH; Roxanne Kerani, PhD, MPH
Journal of the Association of Nurses in AIDS Care
May/June 2022
https://doi.org/10.1097/jnc.0000000000000308

Abstract

Responding quickly to HIV outbreaks is one of four pillars of the U.S. Ending the HIV Epidemic (EHE) initiative. Inclusion of cluster detection and response in the fourth pillar of EHE has led to public discussion concerning bioethical implications of cluster detection and response and molecular HIV surveillance (MHS) among public health authorities, researchers, and community members. This study reports on findings from a qualitative analysis of interviews with community members and providers regarding their knowledge and perspectives of MHS. We identified five key themes: (a) context matters, (b) making sense of MHS, (c) messaging, equity, and resource prioritization, (d) operationalizing confidentiality, and (e) stigma, vulnerability, and power. Inclusion of community perspectives in generating innovative approaches that address bioethical concerns related to the use of MHS data is integral to ensure that widely accessible information about the use of these data is available to a diversity of community members and providers.

2021

The Role of Imagination in Ernst Mach’s Philosophy of Science
Char Brecevic
HOPOS: The Journal of the International Society for the History of Philosophy of Science
2021
https://doi.org/10.1086/712974

Abstract

Some popular views of Ernst Mach cast him as a philosopher-scientist averse to imaginative practices in science. The aim of this analysis is to address the question of whether or not imagination is compatible with Machian philosophy of science. I conclude that imagination is not only compatible but essential to realizing the aim of science in Mach’s biologico-economical view. I raise the possible objection that my conclusion is undermined by Mach’s criticism of Isaac Newton’s famous “bucket experiment.” I conclude that Mach’s issue lies not with thought experimentation, tout court, but with the improper use of thought experimentation as it relates to the aim of the biologico-economical development of science.

Hate Speech
Caitlin Ring Carlson
MIT Press
2021
https://mitpress.mit.edu/9780262539906/hate-speech/

Description

An investigation of hate speech: legal approaches, current controversies, and suggestions for limiting its spread.

Hate speech can happen anywhere—in Charlottesville, Virginia, where young men in khakis shouted, “Jews will not replace us”; in Myanmar, where the military used Facebook to target the Muslim Rohingya; in Cape Town, South Africa, where a pastor called on ISIS to rid South Africa of the “homosexual curse.” In person or online, people wield language to attack others for their race, national origin, religion, gender, gender identity, sexual orientation, age, disability, or other aspects of identity. This volume in the MIT Press Essential Knowledge series examines hate speech: what it is, and is not; its history; and efforts to address it.

Author Caitlin Ring Carlson, an expert in communication and mass media, defines hate speech as any expression—spoken words, images, or symbols—that seeks to malign people for their immutable characteristics. Hate speech is not synonymous with offensive speech—saying that you do not like someone does not constitute hate speech—or hate crimes, which are criminal acts motivated by prejudice. Hate speech traumatizes victims and degrades societies that condone it. Carlson investigates legal approaches taken by the EU, Brazil, Canada, Germany, Japan, South Africa, and the United States, with a detailed discussion of how the U.S. addresses, and in most cases allows, hate speech. She explores recent hate speech controversies and suggests ways that governments, colleges, media organizations, and other organizations can limit the spread of hate speech.

Toxically Clean: Homophonic Expertise, Goop, and the Ideology of Choice
Julie Homchick Crowe
Rhetoric of Health & Medicine
2021
https://muse.jhu.edu/pub/227/article/847798

Abstract

The public’s declining trust in health advice from traditional outlets has long been noted by scholars. But what makes alternative sources for health information appear more trustworthy to some audiences? In this analysis, the author traces the use of expertise and experience as forms of multivocality in the textual artifacts of Gwyneth Paltrow and her enterprise, Goop—specifically those that promote clean eating and detox diets. The analysis illustrates how Goop creates a superficially neutral platform for different voices that make the texts seem polyphonic and by extension more trustworthy given that readers can choose which health plan is right for them. But upon further analysis the author illustrates that Goop blends each voice so that they “move in step” as a choir, combining with Paltrow’s own voice, and ultimately creating an illusion of polyphony and masking a dominant homophonic message that ties together mandates to “ask questions,” empower ourselves, and embrace the assumption that young, slender bodies are signifiers of health and wellness.

Review: Virtual Menageries
Benjamin Schultz-Figueroa
Capacious: Journal for Emerging Affect Inquiry
2021
https://static1.squarespace.com/static/6317b98c456d53408bb7b7a5/t/63181beb1614644ade777638/1662524395664/Review_Virtual_Menageries.pdf

The Intersection of Technology Competence and Professional Responsibility: Opportunities and Obligations for Legal Education
LeighAnne Thompson
SSRN
2021
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3892979

Abstract

Technology has fundamentally changed the legal profession and the delivery of legal services. Lawyers routinely use technology, including artificial intelligence, for legal research, e-discovery, document review, practice management, timekeeping and billing, document drafting, and many other tasks. The American Bar Association (ABA) amended the Model Rules of Professional Conduct in 2012 to include an explicit duty of technology competence, and thirty-nine states have adopted a rule requiring technology competence. Further, the ABA adopted a resolution in 2019 urging the courts and profession to address the ethical issues around using artificial intelligence in the practice of law. This essay traces the developing use of technology in the practice of law, examines the ABA’s guidance with respect to the use of technology in practice, and addresses the intersection of legal competence and professional responsibility. Law schools have an obligation to prepare students to be effective, ethical, and responsible participants in the legal profession, which includes technology competence. Further, law schools must establish learning outcomes which provide competency in professional skills needed for competent and ethical participation as members of the legal profession, which also includes technology competence. Law schools have many opportunities to prepare students to be ethical, responsible users of technology in practice. Required Professional Responsibility courses and curricula should include the ethical pitfalls and considerations of using technology in practice. Law schools should also address the intersection of technology and professional responsibility in legal writing courses, clinics, and externships.

2020

Free-to-Play: Mobile Video Games, Bias, and Norms
Christopher A. Paul
MIT Press
2020
https://doi.org/10.7551/mitpress/12843.001.0001

Description

An examination of free-to-play and mobile games that traces what is valued and what is marginalized in discussions of games.

Free-to-play and mobile video games are an important and growing part of the video game industry, and yet they are often disparaged by journalists, designers, and players and pronounced inferior to games with more traditional payment models. In this book, Christopher Paul shows that underlying the criticism is a bias against these games that stems more from who is making and playing them than how they are monetized. Free-to-play and mobile games appeal to different kinds of players, many of whom are women and many of whom prefer different genres of games than multi-level action-oriented killing fests. It's not a coincidence that some of the few free-to-play games that have been praised by games journalists are League of Legends and World of Tanks. Paul explains that free-to-play games have a long history, and that the current model of premium sales is an aberration. He analyzes three monetization strategies: requirements to spend, where players must make a purchase to gain access; paying for advantage; and optional spending (used by Fortnite, among other popular free-to-play games). He considers how players rationalize or resist spending; discusses sports games and gacha-style games that entice players to make “just one more” purchase; and describes the framing of certain free-to-play games as proper games while others are cast as abusive abominations. Paul's analysis offers a provocative picture of what is valued and what is marginalized in discussions of games.