Balkinization  

Monday, November 05, 2018

Our American Story

Gerard N. Magliocca

I am proud to tell you about a forthcoming book called Our American Story: The Search for a Shared National Narrative, which is now available for pre-order on Amazon. Scholars and elected officials from the left and right (including me) have contributed essays explaining how we think Americans might find common ground in this polarized age. Here is the Abstract:

Over the past few decades, the complicated divides of geography, class, religion, and race have created deep fractures in the United States, each side fighting to advance its own mythology and political interests. We lack a central story, a common ground we can celebrate and enrich with deeper meaning. Unable to agree on first principles, we cannot agree on what it means to be American. As we dismantle or disregard symbols and themes that previously united us, can we replace them with stories and rites that unite our tribes and maintain meaning in our American identity? 
Against this backdrop, Our American Story features leading thinkers from across the political spectrum--Jim Banks, David W. Blight, Spencer P. Boyer, Eleanor Clift, John C. Danforth, Cody Delistraty, Richard A. Epstein, Nikolas Gvosdev, Cherie Harder, Jason Kuznicki, Gerard N. Magliocca, Markos Moulitsas, Ilya Somin, Cass R. Sunstein, Alan Taylor, James V. Wertsch, Gordon S. Wood, and Ali Wyne. Each draws on expertise within their respective fields of history, law, politics, and public policy to contribute a unique perspective about the American story. This collection explores whether a unifying story can be achieved and, if so, what that story could be.

Content Moderation, The Press, and the First Amendment - A Discussion with Ben Smith and Josh Marshall

JB

On Thursday, October 25th, I led a discussion with Ben Smith of Buzzfeed and Josh Marshall of Talking Points Memo about how large platforms like Google and Facebook affect the freedom of the press. This was part of a conference on content moderation held at St. John's Manhattan campus and organized by Professor Kate Klonick.

It was a great conversation, and it ranged widely over a number of very interesting subjects, including (1) how platforms are being dragged into becoming professional curators of the public sphere; (2) how journalistic organizations adjust to policies of platform companies that are not particularly focused on journalism, much less the survival of journalism; and (3) how platforms affect the economics of digital journalism for better and for worse.

Both Smith and Marshall also had important insights about their work in a media environment dominated by platforms.



Sunday, November 04, 2018

A Backdoor Approach to Calling an Article V Convention

David Super


     Proponents of calling an Article V convention certainly have hit a rough stretch.  In 2017, the most prominent of these groups, the Balanced Budget Amendment (BBA) Task Force, secured three new state resolutions asking Congress to call a convention under Article V, but three other states rescinded old resolutions seeking a convention.  In 2018, neither the BBA Task Force nor the Convention of the States Project (COSP) secured a single new state resolution calling for an Article V convention.  Several states with Republican majorities in both chambers of their legislatures buried or voted down convention resolutions. 

     In addition, a third group seeking an Article V convention to reduce federal powers, the Compact for America, released a legal analysis showing that many of the state resolutions from the 1970s and 1980s, which the Task Force includes in its optimistic count of states, contain language that diverges from, and is often inconsistent with, the BBA Task Force’s newer resolutions.  The American Legislative Exchange Council (ALEC), which has heavily supported the BBA Task Force and COSP, ejected the Compact from its meetings, but with a pro-convention group admitting the validity of arguments that liberal and conservative Article V opponents have long made, the BBA Task Force’s claims are increasingly difficult to defend to serious observers. 

     One might imagine that this would cause funders to flee and leaders of these groups to engage in some introspection.  Instead, the pro-Article V groups are responding to these setbacks by trying to move the goalposts.  This suggests that, if next week’s elections yield pro-Article V majorities in both houses, we could easily see an Article V convention without Article V’s prerequisites being met. 

     Former law professor Rob Natelson, a long-time spokesperson for those advocating an Article V convention to limit federal powers, wrote for the Federalist Society this Spring claiming that the BBA Task Force has understated the number of resolutions in force.  Offering little explanation for why he has not made this claim over the many years he has been working with the BBA Task Force, Prof. Natelson identified some old state resolutions seeking an Article V convention and argued that, because he believes they are not facially inconsistent with considering a balanced budget amendment, they should be aggregated with the old and new BBA resolutions that the Task Force has been counting. 

     This methodology led Professor Natelson to conclude that 33 states have active resolutions, one short of the 34 that would trigger the calling of a convention.  This creates the prospect that if a single additional state passes an Article V resolution, the BBA Task Force will demand that its allies in Congress convene an Article V convention.  Given the BBA Task Force’s strong ties to ALEC and major Republican donors, Republican senators and representatives would find these demands difficult to brush aside.  Although a few Republicans – notably Arizona Rep. Andy Biggs, one of the most conservative in Congress – staunchly oppose calling an Article V convention, proponents would only have to pick up a handful of naïve Democrats to open up the Constitution to moneyed special interest groups’ wildest fantasies.  Even if Democrats retake one or both chambers of Congress on Tuesday, a coalition of pro-convention Democrats and Republicans could bring a resolution to call an Article V convention to the floor with a discharge petition. 

     Professor Natelson’s idea for adding five states to the Article V tally without any state legislative action would be alarming enough by itself, but it turned out that he was not finished.  A few months later, he went further and claimed that several states’ rescissions of previous Article V resolutions are not valid.  He disagrees with statements made in the preambles to the rescissions and suggests that these “errors” might render the resolutions invalid on the grounds of “mistake.”  He urges Congress “to weigh whether or not to count purported rescissions flawed by material mistakes.” 

     It appears that in Prof. Natelson’s view, a state legislature commits a mistake almost any time it departs from Article V advocates’ talking points.  For example, he criticizes six states for referring to an Article V convention as a “constitutional convention”.  Article V advocates prefer the euphemistic “convention of the states”.  Neither term is in Article V, but as “constitutional” is an adjective defined as “of or relating to the constitution,” it is difficult to see why a convention whose business is changing the Constitution is not a “constitutional convention”.  He similarly faults five states for preambles expressing concern that an Article V convention could stray to topics far removed from those that motivated states to ask that it be called.  Article V advocates strenuously insist that such a “runaway” convention would not occur, but nothing in the Constitution imposes any limits on such a convention and it is unlikely that the Supreme Court would enforce such limits even if they existed. 

     Needless to say, Prof. Natelson’s theory of mistake would destabilize the entire legislative process.  By this logic, a future president could disregard the December 2017 tax cut legislation because Congress mistakenly believed that the tax cuts would pay for themselves and not add to the deficit.  Congress certainly operated under plenty of misconceptions when it passed the USA PATRIOT Act; do those mistakes render that legislation invalid? 

     Prof. Natelson apparently sees no irony in claiming to champion returning power to the states while suggesting that Congress may disregard state legislatures’ actions when it regards those legislatures as misinformed.  If Congress was empowered to “correct” state legislatures’ discharge of matters clearly within their purview, states would no longer be sovereign. 

     It would be easy to dismiss Prof. Natelson were he not so central to the efforts of both the BBA Task Force and COSP as well as the enormously powerful ALEC.  It seems unlikely that he would be undermining his credibility with these extreme positions were those groups not seriously contemplating an attempt to get Congress to make an end run around state legislatures without the required 34 valid resolutions.  None of these groups appears to have made any effort to distance themselves from Prof. Natelson’s views. 

     This also puts to rest, once and for all, the notion that advocates of an Article V convention somehow represent a principled departure from politics as usual.  If they are open to disregarding the constitutional prerequisite of 34 state resolutions prior to the calling of an Article V convention, no one should expect that they will respect Article V’s requirement that 38 states ratify any proposed constitutional amendments before they take effect.  And they certainly will not respect state resolutions purporting to control convention delegates or their own promises about limiting the scope of an Article V convention. 

     The effort to call an Article V convention is not about aspirations for a better country.  Instead, it is very much an extension of the single-minded, bare-knuckles brand of interest-group politics that has dominated in recent years.  The only difference is that the stakes are even greater.  

Saturday, November 03, 2018

The Devil is in the Data

Guest Blogger

Oliver J. Kim


At various points in our nation’s health history, a new technological advance is hyped as the silver bullet for our healthcare system. Of course, it is an axiom of law and public policy that the speed at which technology advances vastly outpaces the law—that’s why we are coming together for this conference. Without legal, policy, and ethical guidelines to balance innovation, these breakthroughs may lead to unforeseen or even negative consequences for our society in our efforts to make healthcare more affordable and accessible.

One area that I focus on is how technology can be leveraged to reduce health disparities. Concerns about disparities often focus on the relationship between innovation and costs: if these disruptive technologies are only available to those who can best afford them, they will continue to widen the healthcare and digital divides in our society.

But there is another area of concern: who is actually in the data? The simplest way to illustrate this concern came from Jerry Smith, the Vice President of Data Sciences and Artificial Intelligence at Cognizant, at a Politico forum on AI. Type “grandpa” into Google’s image search and see what pictures come up. The vast majority of images are of old, white men; when I did my search for this blog, I scrolled through seven rows before I spotted an African American and down to the twentieth before I saw a second. Perhaps because it is close to Halloween, I spotted a zombie grandpa and a Sponge Bob grandpa before seeing an image even remotely depicting someone of my paternal grandpa’s ethnicity.

There is a Catch-22 about equity in the use of big data. Among many communities of color—often those most hurt by health disparities and in need of greater healthcare access—there is a historic mistrust of the healthcare system. Many individuals may fear giving up data due to uncertainties over who has access and how it may be used against them in unforeseen ways. But without this data, we are building systems that may not reflect our society as a whole.

We know of numerous examples of medical experiments on low-income black communities. These events still have far-reaching effects: as Harriet Washington wrote in Medical Apartheid, “Mainstream medical scientists, journals, and even some news media fail to evaluate these fears in the light of historical and scientific fact and tend instead to dismiss all such doubts and fears as antiscience.” These concerns resonate even today in various aspects of care: in a community study of Washtenaw County, Michigan, African-American participants in a focus group revealed that they were reluctant to share information related to their end-of-life wishes because they were concerned that it could be used against them to ration their care. Current political trends also may make patients—particularly those seeking care that is either stigmatized or at odds with federal policy—fearful of sharing data or even accessing care.

But the datasets that inform our technologies may be biased towards a whiter, more affluent construct of American society and fail to pick up on the nuances that would create a richer, more accurate picture of society as a whole. For example, the term “Asian American” refers to a wide array of very different ethnicities with varied cultures, languages, socioeconomic statuses, and immigrant experiences. But being able to parse out this diversity has huge implications, particularly in health policy, for the Asian American-Pacific Islander (AAPI) community. One often-cited example is that the incidence of colorectal cancer appears to be similar between whites and Asian Americans as a whole, but when data on Asian Americans was disaggregated, researchers found that certain Asian ethnicities have lower screening rates. In other words, if AAPIs are viewed as a whole, it would be difficult to notice that difference, but if the data is further sliced, it is possible to see significant variation. Data disaggregation is a huge issue for AAPI organizations such as the Asian American & Pacific Islander Health Forum, of which I am a board member.
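The masking effect described above can be sketched in a few lines of Python. The screening figures below are invented purely for illustration, not drawn from the studies cited in this post:

```python
# Hypothetical screening counts (invented numbers, for illustration only).
cohorts = {
    "Subgroup 1": {"screened": 620, "total": 1000},
    "Subgroup 2": {"screened": 640, "total": 1000},
    "Subgroup 3": {"screened": 380, "total": 1000},
}

# The aggregated rate looks unremarkable...
screened = sum(c["screened"] for c in cohorts.values())
total = sum(c["total"] for c in cohorts.values())
print(f"Aggregated rate: {screened / total:.0%}")

# ...but disaggregating reveals one subgroup with a far lower rate.
for name, c in cohorts.items():
    print(f"{name}: {c['screened'] / c['total']:.0%}")
```

The aggregate (about 55% here) sits comfortably between the subgroup rates, so a policymaker looking only at the combined figure would never see the 38% subgroup that most needs outreach.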

Some of technology’s limits are due to the biases of its human creators. Often in designing a policy or a product, we may fail to meet people where they are. For example, the means that patients use to access patient portals—or get online in general—can present a barrier for some communities to fully access their data. For many African American and Latino patients, a smartphone, not a desktop computer or a tablet, is the most common device for going online. However, such devices may not be suitable for accessing health records: “Although it is possible for patients with smartphones to access any available computer-based PHR using their mobile devices, websites that are not optimized for mobile use can be exceedingly difficult to navigate using the relatively small-sized smartphone screens.” Moreover, federal Medicare and Medicaid incentives for the meaningful use of electronic medical records “do not require that PHRs be easily accessible via mobile devices.”

If our data is “bedeviled” because it is not fully comprehensive yet the potential sources—many individuals who may have strong feelings about the healthcare system and value their privacy—of such missing data are reluctant to share, how do we exorcise this devil in the data? Indeed, tools such as artificial intelligence and machine learning threaten to exacerbate health disparities and mistrust in the healthcare system if they are built on a data infrastructure that does not truly look like American society.

What can the law do to address these issues? In a forthcoming paper for the conference, I’ll discuss tools that policymakers could utilize to help diversify health data by encouraging an environment of trust, security, and accountability between patients and the research community. Policymakers can regulate, and even prohibit, behavior that runs counter to their policy goals. For example, a series of federal laws—including Section 185 of the Medicare Improvements for Patients and Providers Act, Section 3002 of the Health Information Technology for Economic and Clinical Health Act, and Section 4302 of the Affordable Care Act—were supposed to encourage more rigorous reporting requirements for Medicare, Medicaid, and the Children’s Health Insurance Program as well as federally certified EMRs. Such richer data sets would “represent a powerful new set of tools to move us closer to our vision of a nation free of disparities in health and health care.” However, such requirements are only useful if they are utilized or enforced.

We have high hopes for using data to improve care: “For example, epigenetic data, if validated in large-scale data, could be used to address health disparities and environmental justice.” That “if” though is crucial, and many demons need to be exorcised from the data before the hype over such data and its related uses meets our actual reality. As Dr. Smith noted, “All the data we get from our lives by its nature has biases built into it.” Bias doesn’t necessarily mean animus, but it does mean we need to think through the data—how it was collected, whom it represents—before accepting it carte blanche. 

Oliver J. Kim is Adjunct Professor of Law at the University of Pittsburgh, and Principal, Mousetrap Consulting. You can reach him by e-mail at oliver at mousetrapdc.com



Friday, November 02, 2018

Regulating Social Robots in Health Care

Guest Blogger

Valarie K. Blake

As artificial intelligence is mainstreamed into medicine, robots are designed not just as extensions of human hands but also of human hearts. A social robot is one that is programmed through machine learning to read human emotions (typically through face or voice cues) and to respond with appropriate mimicked emotional states. Social robots may appear to patients like they understand their fears, or pain, or sorrow and might reply with encouragement, or persuasion, or something like empathy. Social robots are already being successfully integrated into medicine: Paro, the therapeutic robot seal designed for elderly patients with dementia, Robin, a robot that helps diabetic children learn self-maintenance, and QTrobot, designed to build social skills in children with autism. Social robot technology is far from attaining the humanoid superiority of Blade Runner or Westworld but the technology is rapidly advancing and it receives a strong assist from our ingrained tendencies to anthropomorphize objects. Many robot scholars think that humans will form significant emotional attachments to social robots; studies of human-robot interactions already demonstrate that humans protect robots from harm, assign them moral significance, and tell them secrets that they might not otherwise share.

The Food and Drug Administration governs the safety and proper labeling of medical devices, for instance pacemakers, but these devices are inanimate; patients do not interact with them or believe them to have feelings and personalities. How should we regulate the social robot, which is neither person nor mere device? That will depend greatly on its design and how patients respond to it. It is possible that a well-designed social robot could raise ethical and legal issues that look more like medical practice and less like a medical device.

Consider the privacy and surveillance implications of a care robot that works something like Amazon’s Alexa but with much greater social valence. Care robots may be at bedsides or in homes twenty-four hours a day, seven days a week. If these robots are programmed to convey information back to the medical provider or programmers (as Alexa does), they may witness and record a patient’s daily health behaviors and, if they really work as designed, even elicit confidences and, in turn, be privy to sensitive information about patients’ mental states. What if a patient shares something embarrassing or private about her medical condition? Patients may not realize that information they casually tell a social robot could be relayed back to health care providers, other people on a medical team, IT personnel, or robot maintenance and developers. Or that their information could be stored for much longer than in conventional medical settings. Also, consider important exceptions to privacy in health care contexts. Imagine the stroke patient who tells her in-home care robot that she has been feeling very down and that she has recently been thinking about suicide. Or, a child who discloses to her diabetes-educator robot, Robin, that her father abused her. Care robots might increase the frequency with which providers find out about such issues. How should the care robot respond? Will such information be conveyed back to a provider in some manner, how quickly, and whose responsibility will it be to make sure this process works seamlessly?

Social robots may also create opportunities for endless patient surveillance. In the churn and burn of modern medicine, providers spend little time at the bedside of patients. The presence of care robots at homes or bedsides presents the possibility of a nanny state, where robots can “narc” on patients, telling the provider about all sorts of conduct or statements that the patient would prefer the provider otherwise not know. For instance, that the patient is drinking again, or smoking, or not taking their medication regularly, or refusing to remain bed-bound. Could such information be used for important clinician decisions, such as whether the patient is eligible for a surgery or for a scarce resource like an organ? Alternatively, might providers and hospitals seek to use this information to mitigate damages in some malpractice suits?

How a care robot is programmed and deployed may make some of these issues more or less likely. But they are meant to suggest a larger issue: never before have we had a category of medical care that is neither perfectly human nor perfectly device. I can think of nothing less like a pacemaker than a high-functioning social robot. Nobody tells their pacemaker secrets, nobody expects their pacemaker to have any autonomy or moral authority, and a pacemaker does not have the capability of relaying secrets back to the medical team. A social robot may be programmed to be social for specific reasons: to be an authority figure, or a proxy for the physician, or a helper and confidante. The social AI that works well does so because it creates a social relationship with the patient. The more successful it is, the more the robot raises important issues around autonomy, coercion, privacy, trust in the robot and in the patient-provider relationship, and other matters that look less like issues covered by FDA regulation and far more like the traditional ethical and legal rules governing health care providers.

At minimum, bioethicists, health lawyers, and health care providers need to be engaged with roboticists at the early stages in this new era in robotics to consider the capabilities of these robots and the likely ethical and legal issues they will raise in health care settings. Beyond this, regulatory models will need to be considered that address this new hybrid in medical care. One possible model is to subject the manufacturers of these robots to a form of licensure that requires compliance with a code of ethical standards, somewhat like how health care providers have to follow certain ethical standards set forth by their state medical boards. Additionally, providers who choose to deploy social robots might sign on to additional ethical norms speaking to proper usage in clinical practice. More thought needs to go into various options for regulation and the best way to bring such groups into a compliance scheme, without overly burdening beneficial innovations. Social robots that truly engage patients have the potential to change the face of medical care, but the better they work the more likely they are to generate significant ethical and legal challenges.


Valarie K. Blake is Associate Professor at West Virginia University College of Law. You can reach her by e-mail at valarie.blake at mail.wvu.edu and on Twitter at @valblakewvulaw

Thursday, November 01, 2018

Balkinization Symposium on Jonathan Gienapp, The Second Creation-- Collected Posts

JB


Here are the collected posts for our Balkinization symposium on Jonathan Gienapp's new book, The Second Creation: Fixing the American Constitution in the Founding Era (Belknap Press 2018).

Jack Balkin, Introduction to Symposium on Jonathan Gienapp, The Second Creation

Jack Balkin, The Second Creation and Originalist Theory

Gerard Magliocca, Fixation and Legitimacy

Bernadette Meyler, The Second Creation and Its Implications

Christina Mulligan, Evolving into the Fixed Constitution

Alison L. LaCroix, The Invention of the Archival Constitution


The Constitutional Challenge to Robert Mueller's Appointment

Marty Lederman

One week from today, on Thursday, November 8 at 1:00, a panel of the U.S. Court of Appeals for the D.C. Circuit (Judges Henderson, Rogers and Srinivasan) will hear argument in Miller v. United States, No. 18-3052, a case challenging the constitutionality of Robert Mueller’s appointment to serve as “Special Counsel” for the Russia investigation.

The appellant is Andrew Miller, a potential grand jury witness who refused to comply with a pair of subpoenas requiring him to provide testimony and documents to the grand jury.  Miller argued, among other things, that the subpoenas should be quashed because Mueller was not lawfully appointed.  Miller continued to refuse to comply with the subpoenas even after Chief Judge Howell denied his motion to quash them, and so the Judge held him in contempt.  Miller has appealed from that contempt order.

He makes three separate arguments that Rosenstein’s appointment of Mueller violated the Appointments Clause of the Constitution, Art. II, § 2, cl. 2, which provides that:
[The President] shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court, and all other Officers of the United States, whose Appointments are not herein otherwise provided for, and which shall be established by Law: but the Congress may by Law vest the Appointment of such inferior Officers, as they think proper, in the President alone, in the Courts of Law, or in the Heads of Departments.
First, and most fundamentally, Miller argues that Special Counsel Mueller is a “principal” officer and therefore could only be appointed by the President, by and with the advice and consent of the Senate, which was not done here.

Second, Miller argues that even if Special Counsel Mueller is an “inferior” officer, his appointment was nevertheless unconstitutional because Congress has not “by law” vested the Attorney General with the authority to appoint such a Special Counsel (in effect, a question of statutory interpretation, about whether the appointment was ultra vires).

Third, Miller argues that even if Special Counsel Mueller is an “inferior” officer, and even if Congress authorized the Attorney General to appoint him, the Deputy Attorney General, Rod Rosenstein, may not make the appointment because he is not the “Head” of the Department of Justice, even where, as here, he’s exercising the functions of the Office of the Attorney General because the Attorney General himself, Jeff Sessions, is recused from the investigation and is therefore unable to exercise those functions.

If the court of appeals were to hold that the Mueller appointment was unconstitutional, that would, of course, be a very big deal.  In a series of posts over at Just Security, however, I do something of a "deep dive" into Appointments Clause arcana in order to explain why that’s a very unlikely outcome.   I also identify two or three questions the court of appeals need not, and probably should not, try to answer definitively that might have greater implications for developments apart from the case on appeal—including, importantly, the nature and scope of the Acting Attorney General’s authority to remove Mueller.

The first post offers a general overview of the case, with links to the lower court opinions and the briefs on appeal.

In my second post, I explain why there’s actually a serious question, not briefed by the parties, about whether the Appointments Clause applies to Mueller at all (a question the court of appeals can likely avoid by simply assuming, without deciding, that Mueller is a constitutional “officer”).

The third post is perhaps the most important—not for purposes of resolving the Miller appeal itself, but more broadly for what it says about the officers throughout the government, including Mueller, whose independence is secured in part by tenure protections that preclude “at will” removal.  In that post, I take issue with the tentative suggestion in Chief Judge Howell’s opinion that it might be proper—or necessary to avoid a difficult constitutional question—for the court to construe expansively the Acting Attorney General’s authority to remove Mueller under the DOJ Special Counsel regulations.

My fourth post addresses a handful of issues raised by Judge Friedrich in her opinion in a related case raising similar Appointments Clause challenges to Mueller, including: whether the Supreme Court’s decision in Morrison v. Olson (1988) is still "good law"; whether a bipartisan consensus has emerged that Morrison was wrongly decided; whether the Special Counsel is an inferior officer whose appointment was constitutional even under the analysis of the Court’s later decision in Edmond v. United States (1999); and whether the prospect of a possible rescission or amendment of the Special Counsel regulations affords Rosenstein greater control over the conduct of the Mueller investigation, and whether that question has any bearing on the Appointments Clause questions in the Miller case.

In my final post, I briefly discuss what I’ve labeled above as the second and third of Miller’s three Appointments Clause arguments, both of which are predicated on the assumption that the Appointments Clause applies and that Mueller is an inferior officer.

Artificial Intelligence and Predictive Data: The Need for A New Anti-Discrimination Mandate

Guest Blogger

Sharona Hoffman

For the Symposium on The Law And Policy Of AI, Robotics, and Telemedicine In Health Care.

A large number of U.S. laws prohibit disability-based discrimination.  At the federal level, examples are the Americans with Disabilities Act (ADA), the Fair Housing Act, the Rehabilitation Act of 1973, Section 1557 of the Affordable Care Act, and the Genetic Information Nondiscrimination Act.  In addition, almost all of the states have adopted disability discrimination laws.  This might lead to the conclusion that we enjoy comprehensive legislative protection against discrimination associated with health status.  Unfortunately, in the era of big data and artificial intelligence (AI) that is no longer true.

The problem is that the laws protect individuals based on their present or past health conditions and do not reach discrimination based on predictions of future medical ailments.  The ADA, for example, defines disability as follows: a) a physical or mental impairment that substantially limits a major life activity, b) a record of such an impairment, or c) being regarded as having such an impairment.  This language focuses only on employers’ perceptions concerning workers’ current or past health status.
Modern technology, however, provides us with powerful predictive capabilities.  Using available data, AI can generate valuable new information about individuals, including predictions of their future health problems.  AI capabilities are available not only to medical experts, but also to employers, insurers, lenders, and others who have economic agendas that may not align with the data subjects’ best interests. 

AI can be of great benefit to patients, health care providers, and other stakeholders.  Machine learning algorithms have been used to predict patients’ risk of heart disease, stroke, and diabetes based on their electronic health record data.   Google has used deep-learning algorithms to predict heart disease by analyzing photographs of individuals’ retinas.  IBM has used AI to model the speech patterns of high-risk patients who later developed psychosis. In 2016, researchers from the University of California, Los Angeles announced that they had used data from the National Health and Nutrition Examination Survey to build a statistical model to predict prediabetes.  Armed with such means, physicians can identify their at-risk patients and counsel them about lifestyle changes and other preventive measures.  Likewise, employers can use predictive analytics to more accurately forecast future health insurance costs for budgetary purposes. 

Unfortunately, however, AI and predictive analytics generally may also be used for discriminatory purposes.  Take employers as an example.  Employers are highly motivated to hire healthy employees who will not have productivity or absenteeism problems and will not generate high health insurance costs.  The ADA permits employers to conduct wide-ranging pre-employment examinations. Thus, employers may have individuals’ retinas and speech patterns examined in order to identify desirable and undesirable job applicants.   The ADA forbids employers from discriminating based on existing or past serious health problems. But no provision prohibits them from using such data to discriminate against currently healthy employees who may be at risk of later illnesses and thus could possibly turn out to have low productivity and high medical costs.   

This is especially problematic because statistical predictions based on AI algorithms may be wrong.  They may be tainted by inaccurate data inputs or by biases.  For example, a prediction might be based on information contained in an individual’s electronic health record (EHR).  Yet, unfortunately, these records are often rife with errors that can skew analysis.  Moreover, EHRs are often designed to maximize charge capture for billing purposes.  Reimbursement concerns may therefore drive EHR coding in ways that bias statistical predictions.  So too, predictive algorithms themselves may be flawed if they have been trained using unreliable data.  Discrimination based on AI forecasts, therefore, may not only harm data subjects, it may also be based on entirely false assumptions.

In the wake of big data and AI, it is time to revisit the nation’s anti-discrimination laws. I propose that the laws be amended to protect individuals who are predicted to develop disabilities in the future.

In the case of the ADA, the fix would be fairly simple.  The law’s “regarded as” provision currently defines “disability” for statutory purposes as including “being regarded as having … an impairment.”  The language could be revised to provide that the statute covers “being regarded as having … an impairment or as likely to develop a physical or mental impairment in the future.”  Similar wording could be incorporated into other anti-discrimination laws.

One might object that the suggested approach would unacceptably broaden the anti-discrimination mandate because it would potentially extend to all Americans rather than to a “discrete and insular minority” of individuals with disabilities.  After all, anyone, including the healthiest of humans, could be found to have signs that forecast some future frailty. 

However, the ADA’s “regarded as” provision is already far-reaching because any individual could be wrongly perceived as having a mental or physical impairment.  Similarly, Title VII of the Civil Rights Act of 1964 covers discrimination based on race, color, national origin, sex, and religion.  Given that all individuals have these attributes (religion includes non-practice of religion), the law reaches all Americans.  Consequently, banning discrimination rooted in predictive data would not constitute a departure from other, well-established anti-discrimination mandates.

It is noteworthy that under the Genetic Information Nondiscrimination Act (GINA), employers and health insurers are already prohibited from discriminating based on one type of predictive data: genetic information.  Genetic information is off-limits not only insofar as it can reveal what conditions individuals presently have, but also with respect to its ability to identify perfectly healthy people’s vulnerabilities to a myriad of diseases in the future.

In the contemporary world it makes little sense to outlaw discrimination based on genetic information but not discrimination based on AI algorithms with powerful predictive capabilities.  The proposed change would render the ADA and other disability discrimination provisions more consistent with GINA’s prudent approach.

As is often the case, technology has outpaced the law in the areas of big data and AI.  It is time to implement a measured and needed statutory response to new data-driven discrimination threats.

Sharona Hoffman is Edgar A. Hahn Professor of Law, Professor of Bioethics, and Co-Director of the Law-Medicine Center, Case Western Reserve University School of Law.  You can reach her by e-mail at  sharona.hoffman at case.edu. 

AIs as Substitute Decision Makers

Guest Blogger

Ian Kerr


For the Symposium on The Law And Policy Of AI, Robotics, and Telemedicine In Health Care.

“Why, would it be unthinkable that I should stay in the saddle however much the facts bucked?”

Ludwig Wittgenstein, On Certainty

We are witnessing an interesting juxtaposition in medical decision-making.

Heading in one direction, patients’ decision-making capacity is increasing, thanks to an encouraging shift in patient treatment. Health providers are moving away from substitute decision-making—which permits a designated person to take over a patient’s health care decisions, should that patient’s cognitive capacity become sufficiently diminished. Instead, there is a movement towards supported decision-making, which allows patients with diminished cognitive capacity to make their own life choices through the support of a team of helpers.

Heading in the exact opposite direction, doctors’ decision-making capacity is diminishing, due to a potentially concerning shift in the way doctors diagnose and treat patients. For many years now, various forms of data analytics and other technologies have been used to support doctors’ decision-making. Now, doctors and hospitals are starting to employ artificial intelligence (AI) to diagnose and treat patients, and for an existing set of sub-specialties, the more honest characterization is that these AIs no longer support doctors’ decisions—they make them. As a result, health providers are moving right past supported decision-making and towards what one might characterize as substitute decision making by AIs.

In this post, I contemplate two questions.

First, does thinking about AI as a substitute decision-maker add value to the discourse?

Second, putting patient decision making aside, what might this strange provocation tell us about the agency and decisional autonomy of doctors, as medical decision making becomes more and more automated?

Read more »

Wednesday, October 31, 2018

All That is Solid Melts Into Air

Joseph Fishkin

I woke up yesterday and saw a New York Times news alert on my phone. It read, in full, as follows:
  • Trump Wants to End Birthright Citizenship. President Trump said he was preparing an executive order to end birthright citizenship in the United States. It’s unclear if he can do so unilaterally.
The initial Times report contained little more than this squib. (It was updated in the hours that followed.) But it turned out to be pretty typical.  Much of the news coverage throughout the day treated the legal question here as one whose answer is “unclear,” “much debated,” and generally full of doubt.

When I read the alert, my first thoughts ran to media criticism. How can the Times possibly be so irresponsible as to suggest that it is “unclear,” or in any respect a close question, whether President Trump has the power through executive order to abrogate the bedrock guarantee of birthright citizenship codified in the Fourteenth Amendment?  A reader inclined to be generous to the Times might observe that “unclear” here may function as a sort of journalistic term of art to describe the existence of a disagreement. In other words: Some say the earth is warming, some say it isn’t; perhaps on one (seemingly rather prevalent) view of journalistic craft, a journalist ought therefore to describe it as “unclear” whether the earth is warming.

That view tends to obscure the place where the main action is. Disagreement, yes, but disagreement among whom? For any belief, no matter how crazy or off the wall, there’s likely someone who believes it. It’s not that hard to find people who believe the earth is flat. An underrated but central part of the job of a journalist—I actually think it is one of the most important parts of the job of a journalist—is to perform a credentialing function, separating the mainstream speakers from the cranks and in that way orienting readers. Thus, a good journalist is obligated to make it plain that in the case of global warming, there is disagreement, and that disagreement consists of an overwhelming, near-unanimous scientific consensus on one side and a crew on the other side consisting of a mixture of well-paid professional obfuscators and a few contrarian cranks. (Plus most of a major American political party, for what it’s worth, which on scientific matters is not much.) This framing is controversial; it is also correct. Why? The justification is actually rather complex and has to do with a larger and well-founded background belief in the enterprise of science itself and the internal norms of the scientific community as a powerful method of gaining knowledge about objective reality. Such a background belief doesn’t mean a journalist should uncritically accept everything a scientist says about a scientific topic. But it does help with judgments of credibility. In Jack Balkin’s ever-more-relevant formulation, it helps us make judgments about what’s “off the wall” and what’s “on the wall.” Basically it helps separate the mainstream speakers from the cranks.

With the Constitution this is not so easy. Both climate change and constitutional law are subjects of hot political debate. (Earth shape, less so.) Climate change, the shape of the earth, and constitutional law are also subjects about which some speakers—scientists on the one hand, lawyers, judges, and legal academics and commentators on the other—have role-based claims to various degrees of expert knowledge and authority. The challenge for journalists in reporting on constitutional controversies in our time is that the methods one would typically use to confidently isolate the flat-earthers and the climate skeptics—to show your readers that these people are cranks—do not work in the same way in constitutional law, because they rest ultimately on foundations that are not available in constitutional law.

Imagine that many powerful people decide that the earth is flat. Suppose a powerful social movement advances this view, takes over a major political party and a major cable news network, and gains the power to appoint public officials and others who share the view. A lot of Americans would likely come to agree that the earth is flat. But here’s the thing: They’d still be cranks. Whatever any authority says, the earth is still round. The climate is still changing. You see the problem. Constitutional law, unlike the physical earth, is a human creation whose substance can and does change in significant ways as a result of complex political-legal-sociological processes of constitutional change.  This creates special challenges for journalists covering controversies in constitutional law.

Read more »

Peter Schuck on Congress' Power to Define The Boundaries on Birthright Citizenship

Rick Pildes

I confess to not having delved deeply into the history of the 14th Amendment's debates regarding the citizenship clause, and probably like most constitutional scholars, I simply assumed from reading the text of the provision that Congress did not have the power to deny citizenship to those born here to persons who did not enter with Congress' "permission" -- i.e., legally.

But in starting to read up on this issue, I discovered that one of our leading scholars of immigration, Professor Peter Schuck, has argued for many years, starting in his 1985 book, Citizenship Without Consent: Illegal Aliens in the American Polity (written with Rogers Smith), that Congress does have the power under the 14th Amendment "to regulate access to birthright citizenship for groups to whose presence or membership it did not consent."  This past summer, Schuck and Smith published a long essay in which they summarized their views, which can be found here.

Of course, the question of whether Congress can legislate on this issue is a completely different question from whether the President can act unilaterally by executive order.  Congress has legislated extensively in this area, and the President does not have the power to contravene Congress here, given the Constitution's explicit commitment to Congress of the power over naturalization and the other relevant powers Congress has that bear on this issue.

But given the discussions now emerging over the general issue of birthright citizenship and the original understanding of the 14th Amendment's citizenship clause, I thought readers would want to know of Peter Schuck's extensive discussion of this issue.  I'm not endorsing Peter's views, of course, but I think many readers will want to be aware of them.

Update:  After I posted this, Schuck and Smith published a Washington Post op-ed reiterating their analysis in short form, here.

Originalism, Living Constitutionalism, and Birthright Citizenship

Richard Primus

The point of departure for this post is a comment that Keith Whittington made on the subject of the current shouting over birthright citizenship.  I thank Whittington for making suggestions about this post before it was posted.

*                                  *                                  *

In a contribution to the collective discussion provoked by the President’s attack on birthright citizenship, Whittington tweeted the following thought:

“I suppose if you are a living constitutionalist, you might think birthright citizenship is up for grabs. If you are an originalist, however, it is not.”

Whittington is a thoughtful scholar, and I read him here to be saying two things.  One is that it is unprincipled for self-described originalists (say, the Vice President) to say that the Fourteenth Amendment does not, or might not, provide for birthright citizenship, either generally or over the range of cases that inspire the current unpleasantness.  That’s because original meanings are what they are, and the original meaning of the Fourteenth Amendment provides for birthright citizenship. 

The other thing I take Whittington to be saying is that living constitutionalism is susceptible to undesirable changes in constitutional doctrine in a way that originalism is not.  I understand “up for grabs” in Whittington’s tweet to mean “open to legitimate contestation in the here and now.”  On that understanding, the idea on offer is that living constitutionalism is open to change through reinterpretation, so it must be open to contestation over constitutional meaning.  And the results of that contestation will sometimes be unfortunate.  Originalism isn’t open to change through reinterpretation, so it avoids that risk.

Read more »

Daffy Duck, Mickey Mouse, or Goofy?

Gerard N. Magliocca

This will be my last post on birthright citizenship until an executive order actually issues (if it ever does). The President stated today on Twitter that "many legal scholars agree" with his position that he can by executive order end birthright citizenship for some people. If that's true, then the White House can surely produce a list of these experts. Or the OLC opinion that takes this position.

Spoiler Alert: They can produce neither.

My Comments on How to Save A Constitutional Democracy by Huq and Ginsburg

Corey Brettschneider

Aziz Huq and Tom Ginsburg should be lauded for their important and excellent new book, How to Save a Constitutional Democracy. They effectively show that most failures of democracy in the last century didn’t appear suddenly and obviously, like a coup. Rather, contemporary authoritarians have used pre-existing legal and constitutional mechanisms to gradually remove the key features of liberal democracies. This is a book that needs to be read and studied closely by scholars.

Read more »

The Founding and the Origins of Our Constitutionalism, Part III

Guest Blogger

Jonathan Gienapp

For the Symposium on Jonathan Gienapp, The Second Creation: Fixing the American Constitution in the Founding Era (Belknap Press, 2018).


III. Originalism and the Original Constitution

If, following the previous installment of my response, I am right that central aspects of our constitutionalism are not, as is often assumed, inexorable byproducts of the Constitution, but instead are an optional set of practices that have grown up around it, then—as several readers note—that surely holds implications for debates over constitutional originalism. But it is not obvious what those implications are, and, as both Jack Balkin and William Baude indicate, they could vary (and perhaps dramatically) depending upon which kind of originalist one is. Through constructive engagement with my work, each of them identifies different reasons why (at least some) forms of originalism are compatible with my account of the Founding. Even if one accepts their well-reasoned arguments, though, I think many originalists would have difficulty accepting some of what Balkin and Baude point to, at least not without revising longstanding commitments.

Balkin concedes that originalists’ unifying precept—that the original meaning of the Constitution was fixed at the time of adoption—“presumes a particular vision of what the Constitution is and how it operates” and he seems persuaded that this vision was not entrenched in the earliest years of the document’s existence. This fact, though, presents little concern for most originalists, he argues, since they can still believe that the purpose of interpretation is to recover original meaning even if the supporting theory was not in place at the Founding. This is partly because we are not beholden to the intentions or expectations of the Founding generation and partly because it can take time to understand the nature of what people have created. But, according to Balkin, it really comes down to a historicist argument—one that initially focuses on interpretive method but eventually spills over to the fixation thesis itself. As he writes, originalists argue for this thesis on the basis of “a historical practice of reading the Constitution.” That is, originalists treat constitutional meaning as fixed not because the Constitution demands it, or because it is in the nature of interpretation, but because “of a living interpretive tradition.” If I am reading Balkin correctly, he seems to agree that originalism is a non-necessary way of thinking about the Constitution that only applies “because of the history of a particular set of rhetorical practices organized around American law and American constitutions.” Had a different set of practices emerged from the 1790s or later, a wholly different way of thinking about the Constitution might have proved natural. In this regard, originalism is not a logical byproduct of the kind of thing that the Constitution is. Instead, originalism is the logical byproduct of a historically-contingent way of imagining and arguing about the Constitution. Balkin’s historicist account indeed complements my portrayal of the Founding.

But I suspect most originalists would have difficulty accepting Balkin’s description. While originalists are often fond of saying that their theory is based on certain normative commitments—to popular sovereignty, to supermajoritarian rule, to particular conceptions of justice, to judicial constraint—most forms of originalism really begin as theories of what the Constitution itself actually is. Whereas other theories get caught up in what the Constitution ought to be, originalism instead respects the Constitution for what it is. As Baude suggests, effectively summarizing what many originalists think, it is “just in the nature of things that writing down constitutional principles would result in a fixed Constitution that should be interpreted using originalism.” Accepting the historicist point would mean recognizing that it is not, in fact, in the “nature of things” that writing constitutions down results in a particular kind of fixity; it would mean recognizing that it is only because of a contingent set of constitutional habits and practices that we find that train of reasoning logical to begin with. This is where the Founding generation comes in. Irrespective of whether we are beholden to their specific intentions or expectations, we might nonetheless conceive of the Constitution in a particular way, not because of anything essential to the Constitution, but because of practices they contingently initiated. If nothing about the Constitution ever required us to treat it as distinctively written, and thus fixed in a certain way, if we only do so because of a non-essential set of habituated practices, then why must we continue to talk and think that way? Constitutional fidelity would not seemingly require it.

Balkin seems to agree that we don’t have to. Nothing absolutely necessitates our practices, he suggests. Their legitimacy instead derives from the fact that they are part of our living tradition that we sanction through continued usage.

This could be where Balkin and many other originalists might part ways. Ever since he unveiled his pathbreaking theory of living originalism, Balkin has tethered originalism to a narrative of redemption, to an account of how the Constitution could be redeemed over time as our law. In this regard, his arguments in this symposium strike me as a logical extension of his longstanding commitments. But most other originalists, by contrast, remain committed to a narrative of restoration, to an account of how the Constitution can be restored to what it has always been. These originalists would, it seems, be much less eager, let alone willing, to accept Balkin’s historicist account of the origins and development of constitutional practice. I imagine they would still insist that the Constitution is a text because it’s a text and that it’s fixed in a particular way because that’s the only way a constitution could be fixed. The Constitution just is these things no matter what anybody thinks about it. If what I have argued in my book is correct, then I would think these originalists would either have to explain why their particular understanding of constitutional text and fixity automatically inhered in the Constitution from the start (regardless of what practices or assumptions initially surrounded it) or they would have to offer a new set of justifications explaining why the Constitution today should be treated as a particular kind of object with a particular set of attributes even if, in fact, it was never necessary to see it that way at all.

In his characteristically sharp and insightful response, Baude adopts a different perspective, specifically considering whether my historical account poses problems for original law originalism—the version of the theory that he and Stephen Sachs have pioneered. Full answers will have to wait for more detailed work, Baude reports, but in the meantime, he gives us plenty to chew on. He poses a series of questions aimed at identifying whether the deep constitutional contestation I illustrate at the Founding in fact undermines the very concept of original law. A great deal hangs on what we mean by law here. On the one hand, I am convinced that the disagreements that followed ratification were fundamental in nature, cutting to the very core of the Constitution. But, as I say in my Introduction, these disagreements always fell under the accepted authority of the Constitution. Everybody acknowledged that, whatever else was true, it was supreme law. But law seems to pick out something more specific in Baude’s theory, not just a source of law but a set of methods or principles for deciphering and elaborating it. I argue that few subjects elicited more confusion or disagreement at the Founding than interpretive methods, but I wonder if Baude and I are talking about the same thing when we reference established rules. And if we are picking out the same thing, perhaps other accepted legal methods, such as Madison’s account of “liquidation” that Baude has so carefully delineated, can explain how certain features of the constitutional landscape became settled over time, and thus how original law originalism and my historical narrative can work in tandem. Bernadette Meyler raises this exact possibility, wondering if my book doesn’t offer, as she puts it, “a larger kind of liquidation narrative.” Perhaps debates in the 1790s, she suggests, liquidated the Constitution itself, transforming it from an inchoate object into a fixed, written text.

In my book, I had only hoped to suggest that, by 1796, Americans’ distinctive conception of constitutional fixity had emerged, not that all fundamental issues had been settled.  Nonetheless, Meyler’s interpretation could indeed support Baude’s conception of originalism which—as he argues in his sophisticated new article—can and should be wedded to Madison’s idea of liquidation.

I am still digesting Baude’s interesting argument. But while I very much take his and Meyler’s point about it, I wonder about two things. First, how widely accepted was the idea of liquidation beyond Madison? More critically, how much acceptance is needed to make it part of the Framers’ law? Second, would most other originalists take liquidation on board? My hunch is that many of them would balk at the prospect, not least because incorporating it would require abandoning certain commitments. Many of them remain wary of adopting the idea of construction after all, or at least its more radical possibilities.

So it could well be that Balkin’s and Baude’s versions of originalism (as Balkin indicates in his own way) are compatible with my account of the Founding while other forms of the theory are not. Regardless, I eagerly await cashing Baude’s promissory note to know for sure.

Clearly, then, as my own preliminary thoughts on some of these matters reveal, there is much more to be understood about early American constitutionalism and its connections to modern constitutional theory. I hope that others, as invigorated by this symposium as I have been, will help tackle some of the questions that this discussion has provoked. With that in mind, I should end where I began, by sincerely thanking my interlocutors for such substantive engagement with my work. In responding to their incisive commentaries, I have gained a much deeper understanding of my book’s larger implications. I trust other readers have as well.

Jonathan Gienapp is Assistant Professor of History at Stanford University. You can reach him by e-mail at jgienapp at stanford.edu

Regulating Carebots for the Elderly: Are Safety and Efficacy Sufficient Standards of Review?

Guest Blogger

Fazal Khan

For the Symposium on The Law And Policy Of AI, Robotics, and Telemedicine In Health Care.

As in most of Europe and Japan, the U.S. population is aging rapidly as baby boomers have entered retirement and birth rates have been declining for several decades. Demographic trends predict a looming crisis in the provision of long-term care that will grow worse over time, especially in the climate of restrictive immigration policies and proposals to block grant and cap spending on Medicaid, which devotes two-thirds of its funding to long-term care.  As described below, a potential solution to address the supply gap in long-term care is the increased use of smart machines, embedded sensors, and artificial intelligence-empowered robots (I collectively refer to these varied technologies as “carebots”) to deliver long-term care services directly and/or augment the capabilities and productivity of fewer human providers.  However, I contend that the basic FDA framework of reviewing the safety and efficacy of medical devices is inadequate as applied to these technologies because they can potentially harm the autonomy interests of patients who still retain decision-making capacity.  Thus, I propose an enhanced regulatory framework for carebots that addresses patients’ autonomy concerns in addition to safety and efficacy.
 
Long-term care provides support services for those who have physical or mental impairments that prevent them from autonomously carrying out activities of daily living (e.g., eating, bathing, and dressing) and instrumental activities of daily living (e.g., preparing meals, managing medications, housekeeping).  Long-term care comprises a spectrum of services, including home health services, adult day centers, assisted living, nursing homes, skilled nursing facilities, and intensive care facilities.  The typical long-term caregiver in the U.S. is not a paid professional, but rather an unpaid relative or friend.  However, the challenge is that this cohort of caregivers is approaching an age at which they might need long-term care themselves, and it is not clear where that assistance will come from.  Thus, demographers predict that the caregiver support ratio, defined as the number of potential caregivers in the prime age group of 45-64 (includes unpaid family members and paid home aides) for every person over the age 80, will rapidly decline in the near future.  In 2010, the caregiver support ratio was 7 to 1.  In 2030, four years after the first “baby boomers” turn 80, this ratio will be 4 to 1—and by 2050 this ratio will drop to 3 to 1.

Read more »

Big Data: Destroyer of Informed Consent

Guest Blogger

A. Michael Froomkin

Consent, that is ‘notice and choice,’ is a fundamental concept in the U.S. approach to data privacy, as it reflects principles of individual autonomy, freedom of choice, and rationality. Big Data, however, makes the traditional approach to informed consent incoherent and unsupportable, and indeed calls the entire concept of consent, at least as currently practiced in the U.S., into question.

            Big Data kills the possibility of true informed consent because by its very nature one purpose of big data analytics is to find unexpected patterns in data. Informed consent requires at the very least that the person requesting the consent know what she is asking the subject to consent to. In principle, we hope that before the subject agrees she too comes to understand the scope of the agreement. But with big data analytics, particularly those based on Machine Learning, neither party to that conversation can know what the data may be used to discover. 

Nor, given advances in re-identification, can either party know how likely it is that any given attempt to de-identify personal data will succeed. Informed consent, at least as we used to understand it, is simply not possible if medical data is to become part of Big Data, and ever so much more so if researchers intend to link personal health records with data streams drawn from non-medical sources because what we will learn with the information cannot be predicted. Similar—indeed, maybe worse—problems arise with big data analytics uses outside the context of medical research, especially as informed consent seemed a plausible solution to the problem of routinized or non-existent consent for data acquisition.

Read more »

Care Robots

Guest Blogger

Paul Vincent Tongsy

For the Symposium on The Law and Policy of AI, Robotics, and Telemedicine in Health Care.

Artificial Intelligence (AI) and care robotics are currently viewed as two separate branches of advancement in modern medicine. For now, AI and care robots are considered advanced tools that augment the skills and intelligence of the human professionals who provide most of the care. There will come a time in the not-so-distant future when AI achieves general or superintelligence. Robots, meanwhile, will become more independent and mobile, while other specialized medical devices and robots will become more miniaturized and advanced. In vivo, in vitro/prosthetic, and other therapeutic robots will all likely become more advanced and prevalent.

Done right, AI will likely merge with robotics, and the resulting care robots will have great potential to enhance care. Envision the common use of robots as care providers throughout the human lifespan (e.g., as nannies, companions, and assistants). This prospect is exciting for some people and terrifying for others. For better or worse, once new technologies like care robots clear the hurdles of successful initial adoption, they tend to become ubiquitous at a rapid pace.

In formal settings or institutions like hospitals, AI adoption is complicated and undertaken by a multi-disciplinary team. Outside of formal settings, AI adoption will depend on multiple variables, including but not limited to competition, government sponsorship or regulation, and market forces. Once the shock value wears off and care robots are found to be effective and safe, the public will likely demand “the best, brightest and newest” care robot. Though prohibitively expensive at first, care robots might still sell out and earn their makers tremendous profits, much like the best smartphones or cars of today.

Naturally, glaring issues already exist or will arise before, during, and after the “robot invasion.”  For healthcare in particular, these issues include paramount concerns about patient safety and privacy. Reevaluating ethics for both humans (bioethics) and robots (roboethics) will also become crucial as more care robots are designed, produced, and adopted. For these robots to succeed, their human creators will have to teach them a great deal about care and caring, including what care looks like and how to provide it safely. The nursing profession and its theories of care can be a valuable resource for care robot learning.

Read more »

Tuesday, October 30, 2018

Who is Your Therapy App Working For?

Frank Pasquale

For the Symposium on The Law and Policy of AI, Robotics, and Telemedicine in Health Care.

Myriad programs in Apple’s App Store claim to address mental health concerns. A larger assortment of less-vetted apps crowds the virtual shelves of the Google Play Store. Cheap mental health apps have been a godsend for health systems pressed by austerity to cut costs, like Britain’s National Health Service (NHS). Via an “NHS Apps Library,” UK authorities have recommended at least fourteen apps for those suffering from depression and anxiety. Unfortunately, according to a study in the journal Evidence Based Mental Health, "the true clinical value of over 85% of NHS accredited mental health apps is at present impossible to determine." Only two of the apps studied used validated metrics. Nor is it clear how app stores arrange their wares, elevating some software and occluding others. Nor are the politics and ideology of app makers apparent at first glance.

This opacity is important, because it is by no means clear that digital substitutes for (or even complements to) extant mental health professionals will live up to the type of fiduciary and other standards that are expected of human providers. Already articulated in the realm of digital assistants, these concerns will only be more pronounced in health care.
Read more »

Authoritarian Constitutionalism in Facebookland

David Pozen


In an article published earlier this year, Kate Klonick memorably described social media platforms like Facebook as the “New Governors” of online speech. These platforms operate with significant legal discretion. Because of the state action doctrine, they are generally assumed to be unconstrained by the First Amendment. Because of Section 230 of the Communications Decency Act, they enjoy broad immunity from liability for the user-generated content posted on their sites. Nevertheless, Klonick showed, these platforms have created intricate rules for determining whether and how to limit the circulation of material that is arguably offensive or obscene, rules that in some respects appear to track U.S. free speech norms. By studying internal Facebook documents and interviewing employees, Klonick began to illuminate the mysterious world of social media content moderation.

Klonick’s latest essay pushes this project further.[*] In “Facebook v. Sullivan,” she investigates Facebook’s use of the “public figure” and “newsworthiness” concepts in its content moderation decisions. Again drawing heavily on interviews, Klonick recounts how Facebook policymakers first turned to the public figure concept in an effort to preserve robust debate on matters of widespread concern while cracking down on the cyberbullying of “private” individuals. Newsworthiness, meanwhile, emerged over time as a kind of all-purpose free speech safety valve, invoked to justify keeping up content that would otherwise be removable on any number of grounds. Defining public figures and newsworthiness in an attractive yet administrable manner has been a constant challenge for Facebook—the relevant First Amendment case law is no model of clarity and, even if it were, translating it to a platform of Facebook’s scale would be far from straightforward—and Klonick walks us down the somewhat mazy path the company has traveled to arrive at its current approach.

Klonick’s essay offers many intriguing observations about Facebook’s “free speech doctrine” and its relationship to First Amendment law and communications torts. But if we step back from the details, how should we understand the overall content moderation regime that Klonick is limning? At one point in the essay, Klonick proposes that we think of it as “a common law system,” given the way Facebook’s speech policies evolve “in response to new factual scenarios that present themselves and in response to feedback from outside observers.” The common law analogy is appealing on several levels. It highlights the incremental, case-by-case development that some of these policies have undergone, and it implies a certain conceptual and normative integrity, an immanent rationality, to this evolutionary process. Facebook’s free speech doctrine, the common law analogy might be taken to suggest, has been working itself pure.

Common law systems are generally understood to involve (i) formally independent dispute resolution bodies, paradigmatically courts, that issue (ii) precedential, (iii) written decisions. As Klonick’s essay makes clear, however, Facebook’s content moderation regime contains none of these features. The regulators and adjudicators are one and the same, and the little we know about how speech disputes get resolved and speech policies get changed at Facebook is thanks in no small part to Klonick’s own sleuthing.

A very different analogy thus seems equally available: Perhaps Facebook’s content moderation regime is less like a common law system than like a system of authoritarian or absolutist constitutionalism. Authoritarian constitutionalism, as Alexander Somek describes it, accepts many governance features of constitutional democracy “with the noteworthy exception of … democracy itself.” The absence of meaningful democratic accountability is justified “by pointing to a goal—the goal of social integration”—whose attainment would allegedly “be seriously undermined if co-operation were sought with [the legislature] or civil society.” Absolutist constitutionalism, in Mark Tushnet’s formulation, occurs when “a single decisionmaker motivated by an interest in the nation’s well-being consults widely and protects civil liberties generally, but in the end, decides on a course of action in the decisionmaker’s sole discretion, unchecked by any other institutions.”

The analogy to authoritarian/absolutist constitutionalism calls attention to the high stakes of Facebook’s regulatory choices and to the awesome power the company wields over its digital subjects as a “sovereign” of cyberspace. It also foregrounds the tension between Facebook’s seemingly sincere concern for free speech values and its explicit aspiration to make users feel socially safe and “connected” (and hence to maximize the time they spend on the site), a tension that is shaped by market forces but ultimately resolved by benevolent leader and controlling shareholder Zuckerberg.

There is a jarring scene in Klonick’s essay, in which a photograph from the Boston Marathon bombing that is “graphically violent” within the meaning of Facebook’s rules is dutifully taken down by content moderators, only to be put back up by unnamed executives on account of its newsworthiness. These executives may have had good intentions, and they may even have made the right call. The episode is nonetheless a reminder of the potential for arbitrary and cynical assertions of authority from on high in Facebookland—and of the potential disconnect between the policies that Facebook adopts and the policies that a more democratic alternative would generate.

Systems of authoritarian constitutionalism and absolutist constitutionalism are not lawless. But their commitment to civil liberties and the public interest is contingent, instrumental, fragile. If one of these models supplies the most apt analogy for Facebook’s regulation of online speech, then the crucial tasks for reformers might well have less to do with refining the company’s content moderation rules than with resisting its structural stranglehold over digital media.

Three response pieces identify additional concerns raised by Facebook’s content moderation practices. Enrique Armijo argues that First Amendment law on “public figures” can and should be embraced by Facebook and Twitter, but that constitutional protections for anonymous speech become far more frightening when exported to these platforms. To the extent that First Amendment law has predisposed platform architects to be tolerant of anonymous speech, Armijo suggests, it has led them disastrously astray.

Amy Gajda points out that Facebook’s “newsworthiness” determinations have the potential to affect not only millions of Facebook users, at great cost to privacy values, but also an untold number of journalists. Given courts’ unwillingness to define newsworthiness when reviewing privacy claims, Facebook’s “Community Standards” could become a touchstone in future media litigation unless and until judges become more assertive in this area.

Finally, Sarah Haan reminds us that Facebook’s decisions about how to regulate speech are inevitably influenced by its profit motive. Indeed, Facebook admits as much. Maintaining a prosocial expressive environment, Haan observes, is difficult and expensive, and there is little reason to expect Facebook to continue to privilege the preferences of American customers as its business model becomes increasingly focused on other parts of the globe.

For those of us who worry about the recent direction of U.S. free speech doctrine, Haan’s invitation to imagine a future Facebook less beholden to First Amendment ideology is also an invitation to imagine a range of new approaches to online content moderation and social media regulation. And that is precisely what the Knight Institute’s next visiting scholar, Jamal Greene, will be asking academics and advocates to do in a forthcoming paper series.




[*] Klonick’s essay is being published, along with three response pieces, as the seventh and final installment in a series I have been editing for the Knight First Amendment Institute at Columbia University.
