by Carol Westbrook
If all the data from the 70 million Fitbits and other wearables in the U.S. were analyzed for clusters of flu-like symptoms, we might have known about the coronavirus epidemic, traced the contacts, and perhaps slowed its spread, even before widespread testing was available. This is the power of wearable health technology.
Did you know your Fitbit could do that?
What sparked my interest in Fitbit health trackers was the recent news that Google acquired Fitbit, Inc., for $2.1 billion! I had thought that wearables were old news, just another fad in consumer electronics that had already passed its time. What value did Google see in wearables?
Wearables are devices used to improve fitness and overall health by promoting and increasing activity. These small electronic devices, worn as wristbands or watches, detect and analyze some of the body’s physical parameters, such as heart rate, motion, and GPS location; some can measure temperature and oxygen level, or even generate an electrocardiogram. What is unique about wearables is that they transmit this data to the wearer’s cell phone, and via the cell phone to the company’s secure database in the cloud. For example, the owner inputs height, weight, gender, and age, and algorithms provide real-time distance and speed of a run, calories expended, heart rate, or even duration and quality of sleep. Fitness goals are set by the wearer or by default. The activities are tracked, and the program sends the wearer messages about whether those goals were achieved, along with prompts to surpass them. Fitness achievements can be shared with friends of your choice–or with Fitbit’s partners, even without your express consent. Read more »
by Evan Selinger
My colleague Thomas Seager and I recently co-wrote “Digital Jiminy Crickets,” an article that proposed a provocative thought experiment. Imagine an app existed that could give you perfect moral advice on demand. Should you use it? Or would outsourcing morality diminish our humanity? Our think piece merely raised the question, leaving the answer up to the reader. However, Noûs—a prestigious philosophy journal—published an article by Robert J. Howell that advances a strong position on the topic, “Google Morals, Virtue, and the Asymmetry of Deference.” To save you the trouble of getting a Ph.D. to read this fantastic but highly technical piece, I’ll summarize the main points here.
It isn’t easy to be a good person. When facing a genuine moral dilemma, it can be hard to know how to proceed. One friend tells us that the right thing to do is stay, while another tells us to go. Both sides offer compelling reasons—perhaps reasons guided by conflicting but internally consistent moral theories, like utilitarianism and deontology. Overwhelmed by the seeming plausibility of each side, we end up unsure how to solve the riddle of The Clash.
Now, Howell isn’t a cyber utopian, and he certainly doesn’t claim technology will solve this problem any time soon, if ever. Moreover, Howell doesn’t say much about how to solve the debates over moral realism. Based on this article alone, we don’t know if he believes all moral dilemmas can be solved according to objective criteria. To determine if—as a matter of principle—deferring to a morally wise computer would upgrade our humanity, he asks us to imagine an app called Google Morals: “When faced with a moral quandary or deep ethical question we can type a query and the answer comes forthwith. Next time I am weighing the value of a tasty steak against the disvalue of animal suffering, I’ll know what to do. Never again will I be paralyzed by the prospect of pushing that fat man onto the trolley tracks to prevent five innocents from being killed. I’ll just Google it.”
Let’s imagine Google Morals is infallible, always truthful, and 100% hacker-proof. The government can’t mess with it to brainwash you. Friends can’t tamper with it to pull a prank. Rivals can’t adjust it to gain competitive advantage. Advertisers can’t tweak it to lull you into buying their products. Under these conditions, Google Morals is more trustworthy than the best rabbi or priest. Even so, Howell contends, depending on it is a bad idea.
Read more »