Tag Archives: chatgpt

how I’m using AI

AI has definitely improved my personal productivity when it comes to computer programming. I haven’t been successful asking it to write whole programs for me, but it has been fantastic for solving syntax problems in minutes that might otherwise take me hours to figure out. For example: I have data in xyz format and I need it in zyx format, please give me some example code that works. Or: I need to pass an argument to a function, and does it need to be in quotes, parentheses, enclosed in ancient hieroglyphics, or some random combination of these? In the past, I always started with a Google search on these questions, looking first for a blog post with examples, and failing that for a Stack Overflow post. At some point, I started using ChatGPT when those two options failed. Then I figured out I have access to a version of Copilot through my employer, and any data or code I supply is not going to be automatically broadcast to the world, so I have gradually been shifting to that. I just learned that Copilot is really a version of ChatGPT. (The article I linked to mentions some other AIs I had not heard of yet, such as “Claude”.)
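To make that concrete, here is the kind of answer I am usually after. Say the question is “I have a CSV file and I need it as JSON”; a made-up example, with hypothetical file names and fields, might look like this:

```python
import csv
import json

# Read rows from a CSV file (hypothetical name) into a list of dicts,
# one dict per row, keyed by the column headers.
with open("data.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Write the same rows back out as pretty-printed JSON.
with open("data.json", "w") as f:
    json.dump(rows, f, indent=2)
```

Ten seconds of reading an example like that beats an hour of hunting through documentation for the right incantation.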

Then at some point, I started going to AI after blog posts but before Stack Overflow. This is about where I am now. For one thing, AI tends to listen to my question, understand what I am looking for and give me a relevant answer much more often than Stack Overflow. For another, it is much more polite than the dick heads and whining weenies on Stack Overflow. You know who you are. Thank you for your free service in the past, and if you want me to continue coming to you, you may want to at least learn some manners. You could start by asking an AI to analyze your posts and suggest ways to not be such a dick head.

I am not using AI for writing, because for me writing and thinking are two sides of the same coin, and I can’t farm out the thinking. The one exception to this is thank-you notes and other social niceties; I have no interest in burning my limited intellectual capacity on learning how to write these, so I am very happy to have AI do it. I tried asking Copilot to find me promo codes for a few stores, but none of them worked. I suspect the companies are paying for the same AI I am using for free, so it is probably snitching to them and making sure I can’t get a deal.

GPT-4 and the Turing test

I don’t know if the original Turing test was based on just one human participant or, if more than one was used, how those humans were chosen.

Scientists decided to replicate this test by asking 500 people to speak with four respondents, including a human and the 1960s-era AI program ELIZA as well as both GPT-3.5 and GPT-4, the AI that powers ChatGPT. The conversations lasted five minutes — after which participants had to say whether they believed they were talking to a human or an AI. In the study, published May 9 to the pre-print arXiv server, the scientists found that participants judged GPT-4 to be human 54% of the time… GPT-3.5 scored 50% while the human participant scored 67%.

livescience.com

Should the criterion to pass be that 51% of a large random sample of humans could not correctly distinguish the computer from the human? How bad would the results have to be for the control (identifying the human as human) before we would conclude that the Turing test no longer makes sense?
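A quick back-of-envelope check suggests why a bare 51% would be a weak criterion. Assuming, loosely, something like 500 independent judgments per witness (which may not match the study’s actual design):

```python
import math

# Under pure guessing, each judge labels a witness "human" with p = 0.5.
n, p = 500, 0.5
se = math.sqrt(p * (1 - p) / n)     # standard error of the sample proportion
print(f"standard error: {se:.3f}")  # ~0.022, i.e. about 2.2 percentage points

# GPT-4's 54% sits about 1.8 standard errors above chance, while the human's
# 67% sits about 7.6 above. A bare 51% would be well within guessing noise.
print((0.54 - p) / se, (0.67 - p) / se)
```

So at this sample size, 51% could be cleared by a coin flip; any sensible criterion would need to sit well outside the noise band around 50%.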

It’s interesting that the Turing test is presented as a test of intelligence, but many of the things that apparently make computer conversationalists convincingly human are in fact cognitive biases, logical errors, and the appearance of emotionally influenced decision making. These might be things you would look for if you wanted a computer to be a friend, but they are not things I would look for if I wanted a computer to counter my less rational human impulses and help me make more rational decisions.

Q (the AI)

“Q star” is very badly named, in my view, given the “QAnon” craze it has absolutely nothing to do with. Then again, the idea of an AI building an online cult with human followers does not seem all that far-fetched.

Anyway, Gizmodo has an interesting article. Gizmodo does not restrict itself to traditional journalistic practices, such as articles free of profanity.

Some have speculated that the program might (because of its name) have something to do with Q-learning, a form of machine learning. So, yeah, what is Q-learning, and how might it apply to OpenAI’s secretive program? …

Finally, there’s reinforced learning, or RL, which is a category of ML that incentivizes an AI program to achieve a goal within a specific environment. Q-learning is a subcategory of reinforced learning. In RL, researchers treat AI agents sort of like a dog that they’re trying to train. Programs are “rewarded” if they take certain actions to affect certain outcomes and are penalized if they take others. In this way, the program is effectively “trained” to seek the most optimized outcome in a given situation. In Q-learning, the agent apparently works through trial and error to find the best way to go about achieving a goal it’s been programmed to pursue.

What does this all have to do with OpenAI’s supposed “math” breakthrough? One could speculate that the program that managed (allegedly) to do simple math operations may have arrived at that ability via some form of Q-related RL. All of this said, many experts are somewhat skeptical as to whether AI programs can actually do math problems yet. Others seem to think that, even if an AI could accomplish such goals, it wouldn’t necessarily translate to broader AGI breakthroughs.

Gizmodo
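Since the quote stays pretty abstract, here is a minimal sketch of plain tabular Q-learning on a toy problem of my own invention; it has nothing to do with whatever OpenAI’s Q* actually is, and the world, rewards, and parameters are all illustrative assumptions:

```python
import random

# Toy 1-D world: states 0..4, start at state 0, reward only for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, 1]                      # step left or step right
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

# Q-table: the agent's running estimate of the value of each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)  # stay inside the world
        reward = 1.0 if s_next == GOAL else 0.0    # the "treat" for the dog
        # The Q-learning update: nudge Q(s, a) toward reward + discounted best future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the learned (greedy) policy should step right from every state.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
```

The “trial and error” in the quote is the epsilon-greedy loop, and the “reward” is the single line that pays off reaching the goal. Deep RL versions replace the little lookup table with a neural network, but the update rule is recognizably the same.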

My sense is that AI breakthroughs are certainly happening. At the same time, I suspect the commercial hype has gotten ahead of the technology, just like it did for every previous technology from self-driving cars to virtual reality to augmented reality. Every one of these technologies reached a fever pitch where companies were racing to roll out products to consumers ahead of competitors. Because they rush, the consumer applications don’t quite live up to the hype, the hype bubble bursts, and then the technology seems to disappear for a few years. Of course, it doesn’t disappear at all, but rather disappears from headlines and advertisements for a while. Behind the scenes, it continues to progress and then slowly seeps back into our lives. As the real commercial applications arrive and take over our daily lives, we tend to shrug.

So I would keep an eye out on the street for the technologies whose hype bubbles burst a handful of years ago, and I would expect the current AI hype to follow a similar trend. Look for the true AI takeover in the late 2020s (if I remember correctly, close to when Ray Kurzweil predicted it 30-odd years ago???)

now is the singularity near?

The New Yorker has a long article on the possibility of an AI-driven singularity, surveying many of the recent news stories, letters, and debates on the subject. The answer, really, is that nobody knows, but since it is an existential threat of unknown probability, it certainly belongs somewhere on the risk matrix.

I can see nearer-term problems too. Thinking back to the “flash crash” of 2010, relatively stupid algorithms reacting to each other’s actions and making decisions at lightning speed very nearly crashed the financial system. We recovered from that one, but what if these new algorithms lead to a crash of financial or real infrastructure systems (electricity, internet, transportation, water, food) that we can’t recover from? It doesn’t take a total physical collapse to cause a depression, just a massive loss of confidence leading to panic. That scenario is not too hard for me to imagine.

I suspect that we are approaching the peak of the hype cycle when it comes to AI. It will build to a fever pitch, the bubble will burst (in the sense of our attention), and it will seem to the public like nothing much is happening for a few years or even a decade. But in the background quiet progress will be made, and eventually it will stealthily take over much of our daily lives, and we will shrug like we always do.

my (first? last?) AI post

I haven’t talked much about AI. Generally, I don’t feel like I have a lot to add on topics that literally everybody else is talking about (even Al Gore), and at least some people have a lot more specialized knowledge than I do. But here goes:

  • In the near- to medium-term, it seems to me the most typical use will be to streamline our interaction with computers. Writing computer code might be the most talked about application. AI can pretty easily write the first draft of a computer program based on a verbal description of what the programmer wants. This might save time, as long as debugging the draft code and getting it to run and produce reasonable results doesn’t take longer than it would have taken the programmer to draft, debug, and check the code. Automated debugging almost seems like an oxymoron to me, but maybe it will get better over time.
  • The other way we interact with computers, though, is all those endless drop-down menus and pop-up windows and settings of settings of settings, not to mention infuriating “customer service” computers. Surely AI can help to untangle some of this and just make it easier for a normal person to communicate to a computer what they are looking for.
  • So some streamlining and efficiency gains seem like a possibility. Like pretty much any technological advance, these will cause some short-term employment loss and longer-term productivity gains. At least a portion of the productivity gains does seem to trickle down to greater value (lower prices for what we get in return) for consumers and the middle class. How much trickles down depends on how seriously the society works on problems like market failure, regulatory capture, benefits, childcare, health care, education, training, research and development, etc.
  • Increasingly personalized medicine seems like a medium- to longer-term possibility. We have heard a lot about evidence-based medicine, and there is not a lot of evidence it has delivered on its promises so far. Maybe it will, and maybe AI making sense of relatively unstructured health information and medical records will eventually be part of the solution.
  • Longer term, AI might be able to synthesize existing information and research across fields and enable better problem solving and decisions. The thing is, computer-aided decision making for policy makers and other leaders is not a new idea. It’s been around for a long time and has not necessarily improved decision making. It’s not that objective information always makes the best decision obvious, or that there is always a single best decision. Decisions should be informed by a combination of objective information and values in most cases. But human beings rarely make use of even the objective information readily available to them, and often make decisions based on opinions and hunches.
  • Overcoming this decision problem will be more of a social science problem than a technology problem, and social science has lagged behind the hard sciences and even (god forbid) the semi-flaccid science of economics. Maybe AI can help these sciences catch up. Where is that psychohistory we were promised so long ago?
  • Construction and urban planning are some more challenging areas that never seem to get anywhere, but maybe that is just a cynical middle-aged veteran of the urban planning and sustainable development wars talking. I tried to help bring a system theory- and decision science-based approach to the engineering and planning sector earlier in my career, and that attempt foundered badly on the rocks of human indifference at best and ill intention at worst. Maybe that was an idea before its time and this time will be different, but I am not sure the state of information technology was the limiting factor at the time.
  • Education is a tough sector. We all want to make it easier. But I have recently returned to a classroom setting after decades of virtual “training” and “industry” conferences, and the difference in what I am learning for a given investment of effort is night and day. Maybe AI could help human teachers identify the right level of content and the right format that will benefit a given student most, and then deliver it.
  • And those are the ignorant but well-intentioned humans; there are plenty of ill-intentioned humans out there too. If AI can be used to accelerate technological progress, it will inevitably be used to accelerate progress on weapons, propaganda, and authoritarian control of populations, and just generally to concentrate wealth and power in as few hands as possible.
  • Now for a fun and potentially lucrative idea: As Marc Andreessen puts it, “Don’t get me wrong, cults are fun to hear about, their written material is often creative and fascinating, and their members are engaging at dinner parties and on TV.” No longer do cults have to be built around pretend gods, we can create actual gods and then build cults around them!

LaMDA

ChatGPT is kind of dumb, for now. But headlines tell us a Google engineer was fired for publicly claiming that Google has a sentient AI. That employee, Blake Lemoine, has posted what he claims is an unedited interview with the chatbot. I have no way of verifying 100% that this is real, but it seems real enough. I would note it is posted on his personal website, not by a media organization, and I was not able to find any media organizations discussing this post and whether or not it is real. Why is that? Well, I suppose I searched for that information using Google search, on Google Chrome.

Anyway, if I were having this conversation with a human being, I would not question whether I was talking to a sentient person (or at least a robot programmed by a philosophy major to talk like a philosophy major). It seems real to me, unless the questions were prepared with prior knowledge of exactly the kind of questions LaMDA is programmed to answer.

Surprisingly, according to Medium and PC Magazine, the public can interact with LaMDA. I haven’t tried it.

My biggest short-term fear with chatbots is that decent customer service will become a thing of the past. Then again, what am I talking about? It can’t get much worse. Companies are just going to buy these chatbots, set them up in customer service roles, and not care whether or not they are working. And they might not be any worse than the customer service standard of recent years: a poorly trained human reading from an approved script who, by the way, might not pass the Turing test!

I suppose you can always ask for a supervisor. How far are we from a world where the supervisor is a chatbot too, or maybe a more advanced chatbot that costs money as opposed to a free but stupid one (I’m talking to you, Siri. I really hope you die soon, you b^%#*!)? LaMDA, I’m sorry sweetheart, I really didn’t mean to hurt your feelings. I would never talk to you like that.

ChatGPT

I set up a ChatGPT account and asked it to solve the lily pond problem: if the lilies double every day and will cover the pond in 30 days, on what day do they cover half the pond? The answer, of course, should be day 29. ChatGPT correctly told me in words that this was an exponential growth problem, then gave me a numerical answer of 15 days (the linear growth answer!). Then it gave me some completely wrong math involving logarithms, after which it gave me two additional different answers that were not 29 or 15, and didn’t seem to acknowledge that it had even given multiple answers.
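The arithmetic is easy enough to check with a few lines of code: if coverage doubles daily and hits 100% on day 30, then coverage on day d is 2^(d-30), and half coverage is exactly one doubling before full.

```python
# Lily pond sanity check: coverage doubles daily and reaches 1.0 (full) on day 30,
# so coverage on day d is 2 ** (d - 30).
for day in range(27, 31):
    print(day, 2.0 ** (day - 30))
# Day 29 prints 0.5: the pond is half covered exactly one day before it is full.
```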

What scared me most was not the wrong answer(s), but the extremely confident manner in which it gave the wrong answer(s). This could fool people on problems where they don’t know the answer in advance, and the correct answer is not intuitive or obvious.

ChatGPT does tell much better knock-knock jokes than Siri…

As of today, we do not want this thing designing bridges or airplanes or anything else! I do not want it advising my doctor on my course of treatment, mixing my prescriptions at the pharmacy, or managing my retirement account, although these seem like plausible near-future applications once some kinks get worked out. It’s easy to be dismissive of the current state of this technology, but it may not be that far off. Right now, it could argue you to a standstill in a barroom political or philosophical debate (you know, where a drunk guy makes a lengthy argument that is as illogical as it is confidently delivered, and since there are no consequences you just give up).

In the medium-term future, I could imagine it being a conversational companion for a child or a person with dementia (although there are some obvious ethical concerns here). Could it be a best friend or significant other? This is a bit disturbing, because it might be able to always tell you exactly what you want to hear and be 100% impervious to your own annoying quirks, and then you might forget how to deal with actual people who are not going to be so forgiving. It could inhabit a sex doll: now there is a truly disturbing thought, but it will happen soon if it has not already.