Book Review: Atlas of AI

By Sancho McCann

Review of: Kate Crawford, Atlas of AI (New Haven: Yale University Press, 2021).

Atlas of AI is a timely book by Kate Crawford covering the various implications of AI today and throughout history. There is intense public scrutiny on the issues covered in this book: labour, energy consumption, climate change, discrimination, privacy, and government accountability. However, these concerns are often overshadowed by the general enamourment with technological artifacts like Amazon Alexa, Google Nest, Tesla, and mobile devices. This book situates AI in the physical and social world. Crawford shows us the ways that AI exploits and changes our relationships with the Earth and with each other.

Crawford covers many of the canonical examples and issues flowing from AI. You will see many of the examples from this book also referenced in any current writing that views the field through a legal, social, or political lens. The latter chapters are an especially good presentation of AI as a tool of state power. While I found this to be a valuable, scope-defining work for people newly thinking about the consequences of AI use and development, I did find portions of it repetitive, vague, or not well connected to a central thesis. These are easy enough to avoid or skim over.

Summary

This book opens with an image of two people holding up what appears to be a mainframe computer on their shoulders, themselves crouched on top of a fragment of the Earth. The computer is projecting, through a lens, a distorted image of the Earth’s surface. That projection sits above the scene.

The opening image from Atlas of AI.

This image neatly captures the themes in this book. AI exploits human labour and resources extracted from the Earth. The conception of AI as disembodied, while sometimes a helpful abstraction (and one that highlights a way in which AI might never be Really Intelligent), serves to obscure the material needs and effects of AI. This is the central theme of the first two chapters, “Earth” and “Labor.” The middle of the book focuses on the computer science of AI and machine learning. AI, especially machine learning, transforms data through its operation. It projects reality into digital representations. These are necessarily approximations, with the goal that the representation preserves the features that are useful for the intended task. In the image above, this approximation appears as a butterfly map, a projection that more accurately presents the distances between locations in northern latitudes. The projection exaggerates the distances and introduces stark discontinuities between land masses in the southern hemisphere. Of course, all map projections onto a flat surface must make tradeoffs, but Crawford’s selection of this particular projection reflects a theme developed in the latter portions of this book: the “jingoistic idea of sovereign AI” and ethics frameworks that are “overwhelmingly produced by economically developed countries, with little representation from Africa, South and Central America, or Central Asia.”

Earth

The first chapter highlights the physical damage to the Earth when we extract the materials needed for our technology: e.g. lithium mines in Nevada, rare earth elements often located in emerging markets and conflict zones, and the latex needed for transatlantic cables, extracted from a species of tree in Southeast Asia. That is just some of what is required to create our physical technological tools. Crawford also highlights the ongoing energy needs to use these tools. Training very large deep neural networks is a prominent example.

Labor

The chapter on Labor shows us how human labour is both controlled by AI and exploited in the development of AI. AI is used to monitor, track, and predict demand and capacity. It is used to make hiring decisions. AI also relies on labour-intensive, crowdsourced, manual labelling of the data needed for its training. Crawford reviews Amazon’s Mechanical Turk and Google’s reCAPTCHA technologies.

Data and Classification

I have grouped these chapters together in this review because they encompass the general focus of current-day research into AI fairness and they cannot be considered separately from each other. These two chapters are a highlight of the book for me. My only real criticism of these chapters is how much of the material in each could equally have been discussed in the other.

The chapter on data is the most coherent in the entire book, with a great collection of on-point examples. It does slightly underplay the current awareness of dataset issues, though. The data chapter has a good discussion of privacy and ownership issues and the practice of using public data to create private models.

The classification chapter highlights the circularity and cycle of feedback between data generation, training, classification, and back to data generation. Classification risks replicating in the future the biases exhibited in the past, and this is tied up both with the data used to train an AI and with the systems surrounding an AI.
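That feedback cycle can be made concrete with a toy simulation (my own illustration, not from the book): a “model” that simply learns historical approval rates per group, and whose own outputs are then fed back in as the next round of training data. The groups, rates, and sample counts below are all hypothetical.

```python
# Toy sketch of the feedback cycle: a classifier trained on past decisions
# generates new decisions, which become the next round's training data,
# carrying forward whatever bias the history contained.
import random

random.seed(0)

def train(labelled):
    """'Train' by estimating the historical approval rate for each group."""
    rates = {}
    for group in ("a", "b"):
        decisions = [approved for g, approved in labelled if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

def classify(model, group):
    """Approve with probability equal to the group's learned rate."""
    return random.random() < model[group]

# Hypothetical history: group "a" was approved 80% of the time, group "b"
# only 40%, even though underlying merit is assumed identical.
history = [("a", random.random() < 0.8) for _ in range(500)]
history += [("b", random.random() < 0.4) for _ in range(500)]

# Each generation: retrain on all data so far, then let the model's own
# decisions become the next round of "ground truth".
for _ in range(5):
    model = train(history)
    history += [(g, classify(model, g)) for g in ("a", "b") for _ in range(250)]

final = train(history)
print(final)  # the historical gap between the groups persists
```

Nothing in the loop corrects the disparity: the system faithfully optimizes against its own past outputs, which is exactly the circularity the chapter describes.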

Making these choices about which information feeds AI systems to produce new classifications is a powerful moment of decision making: but who gets to choose, and on what basis? The problem for computer science is that justice in AI systems will never be something that can be coded or computed. It requires a shift to assessing systems beyond optimization metrics and statistical parity, and an understanding of where the frameworks of mathematics and engineering are causing the problems. This also means understanding how AI systems interact with data, workers, the environment, and the individuals whose lives will be affected by their use, and deciding where AI should not be used.

Affect

This chapter is a focused critique of a particular application of AI: affect (roughly, emotion) recognition. It echoes Crawford’s critique from AI Now’s 2019 Report and elsewhere. It points out the dubious foundations underlying these attempts at classifying people’s emotional state based on their facial expressions. This is another good chapter in terms of coherence and writing style. I would not be surprised if it had existed as a standalone piece before being used in this book.

State

While state use of AI was discussed tangentially throughout the book (COMPAS, law enforcement generally, border crossings, mug shot analysis), this chapter is ostensibly focused on state intelligence, surveillance, and decision-making. However, it unavoidably weaves together a story that includes private research (e.g. Google, Microsoft) and decision-making spanning NSA intelligence gathering to apps like “Neighbors, Citizen, and Nextdoor.” Crawford discusses the Snowden documents, the goal of militaries around the world to gain the AI edge, and the enlistment of private corporations in these efforts. The enamourment with AI surveillance and decision-making has also worked its way down to local police forces, municipalities, and administrative bodies. Crawford presents a commonly referenced example of government decision-making gone wrong: a system used in Michigan to attempt to identify unemployment-insurance fraud. It “inaccurately identified more than forty thousand Michigan residents of suspected fraud.”

Power

This chapter is the book’s conclusion, but it presents a new idea, too: enchanted determinism. This is the paradoxical idea that AI systems are at once “enchanted,” in that they can do what we haven’t told them to do, and also deterministic enough to be safe and ethical for deployment in high-stakes decision-making. I have also argued that you can’t have it both ways, especially when we expect a decision-maker to act with discretion. An algorithm will either be following deterministic instructions, which is a fettering of discretion, or it will incorporate randomness, and that also isn’t what we expect from discretion.

I like Crawford’s reminder that we must attend to the commitments we are making as we develop and deploy AI systems in private and through the state.

Critiques

“AI is neither artificial nor intelligent.” This is a nice, pithy point, but I don’t think it is one that anyone disagrees with, at least in the sense that Crawford sets up through contrast in the following sentence. While some people are optimistic that we might someday create real intelligence, many people doubt that this is possible, and nobody today suggests that we have done so. This is why it is called artificial intelligence. The term artificial is used not to imply that there are no real artifacts or effects of AI, but merely to indicate that these systems are not truly intelligent. They are artifices of intelligence. They are also “artificial” in another sense: they are created by us, rather than occurring naturally. I think this is wholly consistent with the point that Crawford is trying to make with this aphorism, but without the need for Crawford’s linguistic attack on the term itself.

In the Earth chapter, Crawford cites the work of Emma Strubell et al. for the proposition that “running only a single NLP model produced more than 660,000 pounds of carbon dioxide emissions.” Strubell et al. were in fact reporting the cost to train such a network. It is now well recognized that training large neural networks has a large cost to our environment, but also that running them at inference time is not nearly as costly.
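The asymmetry is easy to see with a back-of-envelope estimate. All figures below (device count, power draw, grid carbon intensity, per-query latency) are my own illustrative assumptions, not numbers from Strubell et al. or Crawford.

```python
# Toy estimate contrasting a one-time training run with per-query inference.
# Every constant here is an illustrative assumption.

KG_CO2_PER_KWH = 0.4  # assumed grid carbon intensity (kg CO2 per kWh)

def emissions_kg(power_watts, hours, n_devices=1):
    """CO2 in kg for n_devices drawing power_watts for the given hours."""
    kwh = power_watts * n_devices * hours / 1000
    return kwh * KG_CO2_PER_KWH

# Assumed: 64 accelerators at 300 W each, training for two weeks straight.
training = emissions_kg(power_watts=300, hours=14 * 24, n_devices=64)

# Assumed: one inference query occupies a single 300 W accelerator for 50 ms.
per_query = emissions_kg(power_watts=300, hours=0.05 / 3600)

print(f"training run: {training:.0f} kg CO2")
print(f"one query:    {per_query * 1000:.4f} g CO2")
```

Under these assumptions the training run is on the order of tonnes of CO2, while a single query is a small fraction of a gram; the per-query figure only becomes significant when multiplied across billions of queries.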

I found the continued reference to “logics” to be unhelpfully vague. Crawford often does not elaborate on what a particular logic actually entails. It seems that “logics” is used variably as a synonym for “methods,” “effects,” or “values,” but the intended meaning isn’t clear and seems to shift depending on the usage.

Last, it seems like the editor may have demanded additional content to fill out some of the chapters, especially the chapter on labour. The Google TrueTime example seems shoehorned into this chapter, and it is not a good fit. Google TrueTime is a distributed, consistent clock that is available on Google servers. This is presented in a chapter devoted to highlighting the ways in which AI relies on and enables exploitative labour practices. It is hard for me to see the argument that TrueTime does this. TrueTime is used to consistently create timestamps for events that occur across multiple servers and datacenters, separated by thousands of kilometres. This kind of consistency is critical to programs that deal with any sort of accounting of resources: e.g. bank accounts, selling concert tickets. Many people might be attempting to interact with such a system simultaneously through separate servers, and there has to be a way for those separate servers to determine which interaction happened first. Otherwise, two people might be sold the same final ticket to a concert. Crawford fails to connect this technological example to the thesis developed in this chapter: that AI serves the goals of controlling “bodies in space and time” and of “controlling the political order.” Other examples that Crawford provides are compelling; this one is not.
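For readers unfamiliar with the mechanism, here is a simplified sketch of interval-based timestamps in the spirit of TrueTime, modelled loosely on the public description in Google’s Spanner paper; the constants and function names are mine, not Google’s.

```python
# Simplified sketch of TrueTime-style timestamps. A clock reading is an
# interval [earliest, latest] guaranteed to contain the true time. By
# waiting out the uncertainty before committing ("commit wait"), a server
# guarantees that any later event receives a strictly larger timestamp,
# even if the two servers' local clocks disagree slightly.
import time

CLOCK_UNCERTAINTY = 0.005  # assumed +/- 5 ms bound on local clock error

def tt_now():
    """Return an interval (earliest, latest) containing the true time."""
    t = time.time()
    return (t - CLOCK_UNCERTAINTY, t + CLOCK_UNCERTAINTY)

def commit_timestamp():
    """Pick a timestamp, then wait until it is definitely in the past."""
    _, latest = tt_now()
    ts = latest
    while tt_now()[0] < ts:  # commit wait: roughly twice the uncertainty
        time.sleep(0.001)
    return ts

# Two "servers" selling the last concert ticket: whichever transaction
# commits first is guaranteed the smaller timestamp.
first = commit_timestamp()
second = commit_timestamp()
assert first < second
```

This is the sense in which TrueTime orders "bodies in space and time": it orders transactions, which, as noted above, is a long way from the labour-exploitation thesis of the chapter.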

Again, these are minor points that can be skimmed past or set aside while you read this work. The bigger picture is worth seeing.

Conclusion

For a general audience unversed in the full scope of the use and development of AI, this is a helpful book. Much of the material transfers to concerns with other large-scale technology: batteries, blockchain, digital currency, NFTs. And AI as a tool of state power will need public regulation.