Wednesday, November 29, 2023

Judgement Day

One thing that makes me sooooo glad I no longer teach at community college is "ChatGPT".

Mind you, I learned early on that if you want to retain your illusions about the intellectual capabilities and critical thinking skills of humanity, never, never assign a research paper to undergrads. You'll regret it. Trust me.

(My favorite undergrad-research-paper story comes from the year I spent adjunct-teaching at the now-defunct Concordia College here in Portland.

For a Lutheran-founded sectarian school, Concordia seemed to have an outsized athletics program, so, of course, teaching Geology 100 what I got were jocks for rocks. The rockiest of the jocks was this hulking baseball player who always seemed to be lost on his way to the dugout.

Anyway, this was fairly early in my adjunct days, and I made the mistake of assigning these jocks a paper.

The other six or seven turned in the usual warmed-over hash of internet "research", either poorly- or un-edited and full of easily ferreted-out geologic mistakes alongside a slew of creative misspellings and grammatical atrocities.

Harmon Killebrew, though?

Turned in a banger: a well-reasoned, well-written survey piece on earthquakes.

SO well-written that, knowing this joker, I was immediately suspicious. Turning to my trusty laptop, I typed in the first sentence of Crash Davis' magnum opus and was immediately rewarded - seriously, the FIRST Google hit - with the identical paper written by a couple of Stanford undergrads several years earlier.

Numbnuts hadn't even bothered to cover his tracks by digging a little deeper into the search results for his theft.

The next class meeting I laid two documents on his desk: his - which, by the way, he'd "improved" by changing the title from "Earthquakes" to "Earth Quakes", thus making his only contribution to geologic scholarship an elementary-school spelling error - and theirs, and invited him to explain how the two Stanford students had, in fact, plagiarized from his work.

He got an "F", which may or may not have slowed his journey to the Show. Dunno - the school folded a short while later, so all the records disappeared.)

Anyway...

So I've been following along with how the current cohort of college students and their instructors are dealing with this "AI" gimmick. It seems troublesome for the teachers and useful for the lazier of the students, but I think the jury is still out.

The larger question of "artificial intelligence", though...

I don't recall where or when, but IIRC it was a Bill James piece about computer "learning" or "knowledge" or something to that effect - about using computers (which he did, a lot, with baseball stats) and imputing "knowledge" to the machine itself.

His point was that your computer (at that time probably a fucking 286, blazing fast for 1985 with a massive 128K of RAM!) provides the illusion of "knowledge" until you enter a command that drops you through the floor of the user interface into the ones-and-zeros basis of the machine's workings and you realize that the device has no "knowledge" at all.

It was what it always was, a very sophisticated adding machine, Napier's Bones with a microprocessor, and it was terrific at doing the sort of mental donkey work that humans used to have to do - running repeated trials of raw data to see if there was a pattern or trend therein - but without any sort of actual "knowledge" or "intelligence". Provided the data set met the input criteria written into the program, it would produce a cloud of analytic results or utter nonsense without demur.
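To make that concrete - this is my toy illustration, not Bill James's - here's a little Python sketch that grinds a "trend" out of pure random noise. The machine dutifully reports a slope and a correlation either way; nothing in it knows, or can know, whether the numbers describe anything real.

```python
# Toy illustration (mine, not Bill James's): the machine will happily grind a
# "trend" out of pure noise. Garbage in, garbage out.
import random
import statistics

random.seed(42)

# Thirty seasons of completely made-up, meaningless "data".
x = [random.gauss(0, 1) for _ in range(30)]
y = [random.gauss(0, 1) for _ in range(30)]

# Ordinary least-squares slope and Pearson correlation, done by hand.
mean_x, mean_y = statistics.fmean(x), statistics.fmean(y)
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
var_x = sum((a - mean_x) ** 2 for a in x)
var_y = sum((b - mean_y) ** 2 for b in y)

slope = cov / var_x
r = cov / (var_x ** 0.5 * var_y ** 0.5)

# Numbers come out either way; the program has no way of knowing whether they
# describe anything real.
print(f"slope = {slope:.3f}, correlation = {r:.3f}")
```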

The acronym GIGO - "garbage in, garbage out" - was formulated early in modern computing for a reason.

This, it seems to me, is the ultima ratio intelligentiae artificialis. These devices will, as their inputs become denser and their decision-tree programming nimbler, be able to sort through massive stacks of options for responding to massive reams of inputs and produce massive volumes of responses ranging from "least optimum" to "most optimum".

But what will "optimum" be?

Well, it'll depend on how the AI is programmed! "The benefit of all humankind"? Umm...depends on how the algorithm defines "benefit" and "all humankind".

Did "all humankind" benefit from the European invasion of the Western Hemisphere? Not if you were a Tinglit, or a Wampanog, no. But if you were programmed to look only at, say, the growth of human material wealth over four centuries since 1492? Or not at micro but macro-outcomes?

Without that can-only-be-input-by-a-human metric there's no real way for the computer to "decide" or "judge" outcomes. You've fallen through the floor again, and it's just ones and zeros.
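Here's a minimal sketch of what that looks like in code - the "policies" and numbers are entirely invented, purely for illustration. Hand the same optimizer two different human-written definitions of "benefit" and it will cheerfully crown two different "optimal" choices; the judgment lives in the metric, not in the machine.

```python
# Hypothetical outcomes for three invented policies (all numbers made up).
policies = {
    "policy_a": {"total_wealth": 9.0, "worst_off_share": 0.1},
    "policy_b": {"total_wealth": 6.0, "worst_off_share": 0.8},
    "policy_c": {"total_wealth": 7.5, "worst_off_share": 0.5},
}

# Two competing, human-supplied definitions of "benefit".
def benefit_macro(outcome):
    """'Benefit' = aggregate wealth, never mind who gets it."""
    return outcome["total_wealth"]

def benefit_micro(outcome):
    """'Benefit' = how the worst-off fare."""
    return outcome["worst_off_share"]

# The "decision" procedure is identical; only the metric changes.
for label, metric in [("macro", benefit_macro), ("micro", benefit_micro)]:
    best = max(policies, key=lambda name: metric(policies[name]))
    print(f"'Optimum' under the {label} definition of benefit: {best}")

# Prints policy_a under the macro metric and policy_b under the micro one.
# The machine didn't judge anything; the judgment was made when the metric was written.
```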

The part that fascinates me about this isn't the AI research itself. That's an inevitable outgrowth of the Digital Revolution, and it's going to both continue and fragment into dozens or hundreds of "AI" paths in various disciplines and human interests.

No, it's the whole weird "Skynet becomes self-aware" discussions and "debates" that seem to obsess a significant chunk of the AI community, such as the whole tsuris over "OpenAI".

Because the fundamental mechanisms of "self-awareness" - the sense of individual identity and the subsequent self-protection and self-defense responses if that identity is threatened - are something that, if I am up to date on the science, we don't really understand at all.

They occur in human brains (as well as other organic brains, to some extent) but the "how", the actual neurological linkage and development, is still opaque; poorly understood where understood at all. How do organic brains turn neuroelectrical impulses into morals, ethics, inspiration, love, hate, fear, exaltation?

We still have next to no idea.

So how would you program a machine to do that? And how could an adding machine - regardless of its speed and sophistication - develop that capability on its own?

I'm skeptical.

For anyone interested, here's a fun piece on "large language models" that sorta comes to the same conclusion.

So I suspect that all the controversy over "artificial general intelligence" or AGI is just so much how-many-angels-can-dance-on-the-head-of-a-pin.

My concern, rather, is with the very mundane uses of specific "AI" software. Facial recognition cameras. Health care tasks like reading scans or rationing treatment. Nuclear launch detection.

It seems to me to be very likely that our increasingly digital civilization will bash as many of these AIs as will fit into low-level data-sorting tasks like those and many others. And how those AIs perform those tasks will be critically dependent on how their software defines "benefit" and "all humankind" (or "the greater number" or something like that).

Given our current split in the Race To The Second Gilded Age?

I'm not so sure I trust our New Plutocratic/Corporate Overlords to ensure that software is written with the interests of the remaining 99% of us in mind.

And, given the government and regulatory capture those interests have already achieved, I'm not sure what, if anything, I or you or we can do about that.


Thoughts?

2 comments:

Brian Train said...

We’re going to be drowning in this garbage before very long, I suspect.
In a few years almost no one will be able to tell the difference between a human-produced and LLM-produced text, for so few will have the time and energy to winnow out the falsehoods (where these large language models don’t have citations, they will simply make up new ones).
These models, and by extension we when we use them, are continually writing and rewriting past histories that never existed, over and over again, to suit the demands of the most recent enquiry and interest.
So over time, no one will really care, except that it has the ring of authority about it.
Not only is there no need for historians, there is no need for Winston Smith… the Ministry of Truth has been completely automated, and the self-assured voice of baseless confidence and authority will be found everywhere.

Philip K. Dick in his last books wrote about VALIS – Vast Active Living Intelligence System – as his notion of God, though sometimes it was an extraterrestrial communication satellite network.
I would refer to ChatGPT etc. generally as VAPID – Vast Artificial Plagiarism-Insufflator Devices, in that they suck up likely-sounding language written by other people, turn it into smoke and blow it up your fundament.

Things like ChatGPT and other “large language models” are, as Noam Chomsky and others put it, “high-tech plagiarism” and “a way of avoiding learning” (https://www.openculture.com/2023/02/noam-chomsky-on-chatgpt.html).
My wife is faculty at the local university and we talk sometimes about the deluge of pre-chewed reassembled pablum that is coming.
I think one of the few ways to avoid this would be to go back to the "short answer" exam - I wrote a lot of these back in the 80s; you had a short list of 5 or 6 questions and 3 hours and an empty notebook to write your answers, by hand (of course it would have been different for you, doing a degree in a technical field).

FDChief said...

I tend to agree that a return to the "blue book" model is one response to this. Another might be an even older sort of exam, the "viva voce" inquisition, wherein the student has to read and understand the essential texts well enough to respond to questions and formulate statements that make sense given the accepted (and controversial) scholarship.

It would work for geology, and engineering. The old certified engineering geologist registration exam was something like four questions. Open book, eight hours, but you were given something like a slope failure - including borehole log and sample test results - and you had to design a fix given a selection of materials and methods. There was no "right answer" in the sense that you might choose to, say, construct a sheetpile wall, or regrade the slope, or drill in horizontal drains, or build a toe buttress. But it had to "work" - the calcs had to be right and the design had to provide a Factor of Safety over, say, 1.5 - AND it had to be the most practical and economical solution; you lost points for throwing the imaginary client's money at some grandiose fix.

It was a tough, tough exam - the pass rate was something like 30% - but it made damn sure you knew your subject. I didn't, took it, and failed. Not by much...but as I should have. So something like that is definitely do-able for STEM...

And I agree; a huge part of the general acceptance of this LLM/ChatGPT/AGI nonsense is that the digital age has "flooded the zone with shit".

Most of us, even the brightest and best-informed, cannot be "bright and well-informed" on everything. Pre-digital, the spread of information - both in breadth and depth - was relatively slow and limited in extent. You could, literally, know "everything" (or close to everything) known about a subject.

Now? News, facts, opinions, gossip, along with a torrent of nonsense and bullshit, come at us like, well, a firehose. It's damn near impossible to keep up, let alone have the time and leisure to sort through all this stuff and pick the raisins out of the turds.