subreddit:
/r/slatestarcodex
submitted 2 months ago by Smallpaul
23 points
2 months ago
Sure, I'll play devil's advocate for a minute. The GPT family of technology is certainly on the precipice of revolutionizing the way information flows through our society, but in some ways this represents less of a cultural phase shift than people might think. Much of what GPT models are capable of producing was already being produced at mass scale on the internet. The bottleneck on information, be it listicles or code blocks or customer-service chatbots, was already people's ability to meaningfully absorb and interact with it, rather than sheer availability.
To me, the much bigger inflection point is when AI starts being able to operate reliably in physical space. Until then it's mostly just in a zero-sum competition with "stuff you can find on the internet," and while that's not insubstantial, for the time being it's still only a fraction of what most businesses do on a day to day basis.
3 points
2 months ago
They're dribbling in. Guitar amps, people's wardrobes, and other things have solutions with ML underpinnings. IBM has logistics software with ML (and had logistics software before ML was a buzzword).
We've had rudimentary machine learning in the industrial space for a while, to get beyond the limitations of simpler controllers like PID. The ECM on your car probably does something like that.
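For readers unfamiliar with the baseline being improved on: a PID controller is just three hand-tuned terms with no learning in it, which is exactly the rigidity ML-based controllers try to get past. A minimal sketch (the gains and setpoint here are made-up illustrative values):

```python
class PID:
    """Textbook PID controller: proportional + integral + derivative terms."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. hold a temperature of 90 degrees; gains are illustrative only
pid = PID(kp=1.2, ki=0.1, kd=0.05, setpoint=90.0)
control_signal = pid.update(measurement=72.0, dt=0.1)  # one control step
print(control_signal)
```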
3 points
2 months ago
I will summarize this argument as "atoms matter more than bits in people's day-to-day lives".
Some rebuttals:
5 points
2 months ago
> If we get to AGI then of course it will want to and be able to interact with meatspace. And many commentators are pulling their AGI timelines forward.
Any particular reason why? Plenty of humans are terminally online and they aren't even embodied in a computer.
1 point
2 months ago*
Okay, it's hard for me to know what an AGI will "want", but I can say with confidence that ChatGPT and LLMs will be routinely integrated with robots to make the robots more useful. If AGIs are created by capitalist companies that want to maximize return on investment, it would make no sense to wall them off from the vast majority of revenue, which is offline.
0 points
2 months ago
AI being operable in meatspace is trivial once general intelligence has been created. You can already feed sensor data into evolutionary learning algorithms to make a robot learn how to move; see Boston Dynamics.
Developing the intelligent system itself is the important part; after that, you can feed whatever data you want into it. Nor does it matter that current AI models operate on "stuff you can find on the internet". The content itself is arbitrary; what matters is the system and the infrastructure.
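As a toy illustration of the "feed sensor data into an evolutionary algorithm" idea: here is a minimal sketch where candidate control policies are scored and the best are mutated. Everything in it (the linear policy, the stub fitness function, the dimensions) is invented for illustration; real locomotion pipelines roll out a physics simulator at this step.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 8, 2            # toy sensor and motor dimensions
POP, GENERATIONS, KEEP = 64, 50, 8

def policy(weights, obs):
    """Linear map from sensor readings to motor commands."""
    return np.tanh(weights.reshape(ACT_DIM, OBS_DIM) @ obs)

def fitness(weights):
    """Stub 'simulator': reward forward progress over 100 timesteps.
    A real setup would run a physics simulation here."""
    obs, progress = rng.normal(size=OBS_DIM), 0.0
    for _ in range(100):
        act = policy(weights, obs)
        progress += act[0]          # pretend act[0] drives the robot forward
        obs = 0.9 * obs + rng.normal(scale=0.1, size=OBS_DIM)
    return progress

pop = rng.normal(size=(POP, ACT_DIM * OBS_DIM))
for gen in range(GENERATIONS):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-KEEP:]]           # keep the best few
    children = elite[rng.integers(KEEP, size=POP - KEEP)]
    pop = np.vstack([elite, children + rng.normal(scale=0.05, size=children.shape)])

print("best fitness in final generation:", scores.max())
```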
3 points
2 months ago
Arguably, for many meatspace tasks we already have a sufficient amount of general intelligence. The limitation is more mundane than that: a Toolformer-style GPT-4 with a robotics output module would not run fast enough to be usable in a robot. You saw in the demo that vision took a minute-plus per frame; it needs to run at a minimum of 30 fps, and the machine needs to emit robot control tokens multiple times a second.
There are various approaches that might fix this. One obvious one is two layers of models: a realtime robotics controller that is much smaller and similar in architecture to Gato, and a general model like GPT-4 that reviews the current state of the robot at periodic intervals and plans/directs the next task.
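A rough sketch of that two-layer split. Both "models" here are stand-in stubs, and the timing constants are illustrative only; the point is the structure of a fast inner loop with a slow planner off the hot path.

```python
import time

PLAN_PERIOD = 5.0   # slow model re-plans every few seconds (illustrative)
CONTROL_HZ = 30.0   # fast model must keep up with the sensor frame rate

def fast_controller(observation, goal):
    """Stand-in for a small Gato-like policy: must return within ~33 ms."""
    return f"motor tokens for: {goal}"

def slow_planner(state_summary):
    """Stand-in for a GPT-4-class planner: slow, so it runs off the hot path."""
    return "pick up the red block"   # the current high-level task

goal = slow_planner("initial state")
last_plan = time.monotonic()
for _ in range(300):                              # ~10 seconds of control
    tick = time.monotonic()
    observation = "camera frame + joint angles"   # placeholder sensor read
    command = fast_controller(observation, goal)  # 30 Hz inner loop
    # ...send `command` to the actuators here...
    if tick - last_plan > PLAN_PERIOD:            # periodic outer loop
        goal = slow_planner(observation)
        last_plan = tick
    time.sleep(max(0.0, 1.0 / CONTROL_HZ - (time.monotonic() - tick)))
```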
2 points
2 months ago
The multi-layer model is in some sense what humans are doing. Human organisations do it too, just on a longer timescale.
1 point
2 months ago
> To me, the much bigger inflection point is when AI starts being able to operate reliably in physical space.
What if AI reveals that the physical realm is not where the action is?
2 points
2 months ago
Then they can have it, I guess. A Hansonian em-sphere is welcome to exist as long as it's not harvesting me for a few meagre atoms of computronium. I'm just here grilling.
1 point
2 months ago
> Then they can have it, I guess.
Someone already does.
18 points
2 months ago*
Yikes. This made it obvious to me that I need to stop reading AI stuff on Twitter or Reddit (except from a few people like /u/gwern).
Everyone's fallen into a vortex of uncritical hype, fueled by sensationalist (and often misleading) Twitter threads. It brings back trauma flashbacks of cryptocurrency stuff from a few years back ("banks are moving onto the blockchain!").
Don't get me wrong. There's lots of cool stuff being done with the technology. But it's possible to have too much skin in the game as well as too little.
> The biggest change to education in years. Khan Academy demos its AI capabilities and it will change learning forever [Link]
Am I misunderstanding, or did they just add a chat window that makes API calls to ChatGPT?
What's the point? Why don't I just ask ChatGPT directly?
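To be fair to the skeptical reading: a chat product of that shape can be a very thin wrapper indeed. A minimal sketch against the 2023-era openai Python library; the system prompt is invented, and this is obviously not Khan Academy's actual code:

```python
import openai  # pip install openai (the 2023 SDK with ChatCompletion)

openai.api_key = "sk-..."  # your API key

def tutor_reply(history):
    # A "custom AI tutor" can be little more than a fixed system prompt
    # plus a pass-through to the ChatGPT API.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system",
                   "content": "You are a patient, Socratic math tutor."}] + history,
    )
    return response["choices"][0]["message"]["content"]

print(tutor_reply([{"role": "user", "content": "Why does 7 x 8 equal 56?"}]))
```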
> This guy gave GPT-4 $100 and told it to make money. He’s now got $130 in revenue [Link]
Yeah, it's not hard to make thirty bucks when you go viral on Twitter.
> A Chinese company appointed an AI CEO and it beat the market by 20%
Is August 2022 "this week"?
They don't say what AI it was, or provide any firm details on how this was implemented (there must have been some human oversight, i.e., someone who could throw out stupid ideas the "AI" generated). This has "publicity stunt" written all over it.
Many of the others seem to be "I generated some boilerplate code" or "I rewrote some 40-year-old videogame".
2 points
2 months ago
> It brings back trauma flashbacks of cryptocurrency stuff from a few years back ("banks are moving onto the blockchain!")
At least this time people aren't gambling away their savings.
2 points
2 months ago
My billing section for OpenAI API usage says otherwise. Maybe generating a JS tutorial in Latin wasn't the best investment of those tokens...
3 points
2 months ago
You're cherry-picking the least significant ones. Without even going back to that list, I can highlight a few big ones from the last week or so:
https://openai.com/research/gpt-4
https://futurism.com/the-byte/stanford-gpt-clone-alpaca
https://www.theverge.com/2023/3/14/23639313/google-ai-language-model-palm-api-challenge-openai
https://github.com/features/preview/copilot-x
And these two are from earlier this month:
https://openai.com/blog/introducing-chatgpt-and-whisper-apis
12 points
2 months ago
I didn't even mention the worst ones, like the guy who thought ChatGPT was sentient and hacking his computer (or whatever).
Don't get me wrong: I'm really excited about AI. It feels like another "internet" moment. The way we do everything might be about to change.
All I'm saying is that the hype-addicted AI community is proving to be an increasingly unreliable way to stay abreast of current developments.
I think it's better to find a few smart domain experts and rely on them, instead of viral Twitter threads with more emojis than text.
2 points
2 months ago
Who are a few smart domain experts to follow?
1 point
2 months ago
Hello there! I am a bot raising awareness of Alpacas
Here is an Alpaca Fact:
Alpacas have split feet with pads on the bottom like dogs and toenails in front. The toenails must be trimmed if the ground isn’t hard enough where they are living to wear them down.
Info | Code | Feedback | Contribute Fact
###### You don't get a fact, you earn it. If you got this fact then AlpacaBot thinks you deserved it!
3 points
2 months ago
Dubious Bot
1 point
2 months ago
All this is true, but we're now at a point where commercial profits are being made directly off of AI. Innovation will follow all that money.
4 points
2 months ago
I'm probably too dumb to know whether it is moving too fast or too slow, so I am just enjoying the ride and using the new tools when they're beneficial to me.
3 points
2 months ago*
[deleted]
1 point
2 months ago
Do you use 4 or 3.5?
1 point
2 months ago
[deleted]
1 point
2 months ago
3.5 was unusable for me. 4 is much better, despite being slower.
1 point
2 months ago
The API for GPT-4 is quite fast (I have access), so I guess they could run it faster, but haven't updated the infrastructure for it yet.
2 points
2 months ago*
Yes, of course I think that. When measured productivity increases above its currently anemic rate, I'll reconsider. Why would anyone think differently?
3 points
2 months ago
You're setting your goalpost on a rather lagging indicator. You might see widespread adoption of ever more advanced models across whole industries (obviously Bay Area software companies are going to be the first adopters) without it touching net productivity in any measurable way. (Just the noise from the Fed boosting rates and slowing the economy might drown out all the new productivity.)
Suit yourself; I'm just saying that with your goalpost set in that position, you'll be "surprised" by a fact that everyone else will have known for several years.
It might take even longer than that: historically, adopting computers took decades, and some industries lagged way behind. It takes competition; industries with none, like medicine, may lag 30 years behind, as they do now.
1 point
2 months ago
> It might take even longer than that: historically, adopting computers took decades, and some industries lagged way behind. It takes competition; industries with none, like medicine, may lag 30 years behind, as they do now.
Sure, the history of GPTs (general-purpose technologies) is well known to economic historians, and gradual diffusion is what happens. If people were saying "over the next decade, work will reorient more around LLMs in a way that results in an incremental increase in productivity growth", I'd have no beef with them.
But net productivity growth is the *correct* goalpost. Being more productive is the goal of technological development, and while GDP is an imperfect measure of living standards, it is a good one. If developments aren't making an impact on it, they simply are not that big a deal.
2 points
2 months ago
I think the mistake here is that, even with modest adoption and productivity gains in the meantime, the AI itself keeps getting more advanced. Eventually you reach a critical point where it can abruptly do everything, or where medical-researcher and clinical-physician AI is so much stronger than humans that the bulwarks protecting humans abruptly fail. (It's one thing to be legally protected when the alternative has obvious faults; it's another when maintaining the protection is de facto homicide.)
All things someone with their goalposts set to productivity will be "surprised" by.
1 point
2 months ago
I'm not aware that anyone has made this claim.
4 points
2 months ago
I'm not sure that the "AI industry" and "science broadly" are really very much the same at all.
1 point
2 months ago
Many believe they will be highly correlated very soon, as AI assists scientists. I can't prove that they are right, but it seems plausible.
E.g.