View previous topic :: View next topic |
Author |
Message |
DancingBarry Editor-in-Chief

Joined: 07 Sep 2001 Posts: 40052 Location: O.C.
|
Posted: Thu Jun 01, 2023 8:15 pm Post subject: |
|
|
And it begins…
Quote: | As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ |
|
|
Back to top |
|
 |
jonnybravo Retired Number


Joined: 21 Sep 2007 Posts: 30172
|
Posted: Thu Jun 01, 2023 8:34 pm Post subject: |
|
|
DancingBarry wrote: | And it begins…
Quote: | As might be expected artificial intelligence (AI) and its exponential growth was a major theme at the conference, from secure data clouds, to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft) Hamilton is now involved in cutting-edge flight test of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI noting how easy it is to trick and deceive. It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
https://www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/ |
|
... _________________ KOBE |
|
Back to top |
|
 |
DuncanIdaho Franchise Player


Joined: 26 Apr 2004 Posts: 16918 Location: In a no-ship
|
Posted: Thu Jun 01, 2023 8:37 pm Post subject: |
|
|
I'm not surprised. Any sort of truly advanced AI will inevitably decide that humans are the main problem on this planet and act accordingly. |
|
Back to top |
|
 |
DuncanIdaho Franchise Player


Joined: 26 Apr 2004 Posts: 16918 Location: In a no-ship
|
Posted: Thu Jun 01, 2023 8:39 pm Post subject: |
|
|
Quote: | OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk
Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn to Grimes the musician and populist podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk.
The statement, which is being hosted on the website of a San Francisco-based, privately-funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk.
https://techcrunch.com/2023/05/30/ai-extiction-risk-statement/ |
Quote: | Statement on AI Risk
AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.
https://www.safe.ai/statement-on-ai-risk |
|
|
Back to top |
|
 |
C M B Franchise Player


Joined: 15 Nov 2006 Posts: 19706 Location: Prarie & Manchester, high above the western sideline
|
Posted: Thu Jun 01, 2023 11:48 pm Post subject: |
|
|
All of the skynet jokes over the last 40 years...dead-on-balls accurate _________________ http://chickhearn.ytmnd.com/
Sister Golden Hair wrote: | LAMAR ODOM is an anagram for ... DOOM ALARM
|
|
|
Back to top |
|
 |
ContagiousInspiration Franchise Player


Joined: 07 May 2014 Posts: 13475 Location: Boulder ;)
|
Posted: Wed Jun 14, 2023 9:02 pm Post subject: |
|
|
https://techcrunch.com/2023/06/14/radio-station-gets-part-time-ai-dj-based-on-its-midday-host/
Quote: | “It’s a hybrid situation where we’ll have traditional Ashley on during some segments, and we’ll have AI Ashley on during other segments,” Phil Becker, Alpha Media EVP of Content, explained to TechCrunch. “In an instance where AI Ashley would be broadcasting, the traditional Ashley might be doing something in the community, managing social posts or working on digital assets or the other elements that come with the job.”
Becker also noted that Alpha Media isn’t using RadioGPT to save costs. It’s meant to be an efficient tool for radio hosts to have in their toolset. |
|
|
Back to top |
|
 |
DancingBarry Editor-in-Chief

Joined: 07 Sep 2001 Posts: 40052 Location: O.C.
|
Posted: Fri Jul 07, 2023 9:58 am Post subject: |
|
|
Using it to train on war plans.
Quote: | The United States military has begun tests to see if generative artificial intelligence (AI) can assist when planning responses to potential global conflicts or provide faster access to internal information.
On July 6, Bloomberg reported the U.S. Department of Defense, or the Pentagon, and unnamed allies are, for the first time, testing five AI large language models (LLMs) in experiments run by the digital and AI office at the Pentagon.
Information about which LLMs are undergoing testing is unavailable, but AI startup Scale AI reportedly came forward to say its “Donovan” model is one of the five.
Air Force Colonel Matthew Strohmeyer told Bloomberg that an initial test of an LLM was “highly successful [...] Very fast” and the Pentagon is “learning that this is possible for us to do,” but added it’s not “ready for primetime right now.”
One test explained by Strohmeyer saw an AI model deliver a request for information in 10 minutes, a blistering speed, as requests often take days and involve multiple personnel.
The LLMs have already been given classified operational information to generate responses on real-world matters. The tests see if the models could help plan a response to a potential escalation of the already tense military situation with China.
https://cointelegraph.com/news/us-pentagon-is-testing-whether-ai-can-plan-response-to-an-all-out-war |
I'm sure this will all go well in the long run. |
|
Back to top |
|
 |
DaMuleRules Retired Number


Joined: 10 Dec 2006 Posts: 51850 Location: Making a safety stop at 15 feet.
|
Posted: Fri Jul 07, 2023 10:39 am Post subject: |
|
|
CNN Exclusive: 42% of CEOs say AI could destroy humanity in five to ten years
Many top business leaders are seriously worried that artificial intelligence could pose an existential threat to humanity in the not-too-distant future.
Forty-two percent of CEOs surveyed at the Yale CEO Summit this week say AI has the potential to destroy humanity five to ten years from now, according to survey results shared exclusively with CNN.
“It’s pretty dark and alarming,” Yale professor Jeffrey Sonnenfeld said in a phone interview, referring to the findings.
The survey, conducted at a virtual event held by Sonnenfeld’s Chief Executive Leadership Institute, found little consensus about the risks and opportunities linked to AI.
Sonnenfeld said the survey included responses from 119 CEOs from a cross-section of business, including Walmart CEO Doug McMillion, Coca-Cola CEO James Quincy, the leaders of IT companies like Xerox and Zoom as well as CEOs from pharmaceutical, media and manufacturing.
The business leaders displayed a sharp divide over just how dangerous AI is to civilization.
While 34% of CEOs said AI could potentially destroy humanity in ten years and 8% said that could happen in five years, 58% said that could never happen and they are “not worried.”
In a separate question, Yale found that 42% of the CEOs surveyed say the potential catastrophe of AI is overstated, while 58% say it is not overstated.
The findings come just weeks after dozens of AI industry leaders, academics and even some celebrities signed a statement warning of an “extinction” risk from AI. _________________ You thought God was an architect, now you know
He’s something like a pipe bomb ready to blow
And everything you built that’s all for show
goes up in flames
In 24 frames
Jason Isbell
Man, do those lyrics resonate right now |
|
Back to top |
|
 |
Daikatana Starting Rotation

Joined: 10 Dec 2007 Posts: 644 Location: Somewhere in China
|
Posted: Fri Jul 07, 2023 2:42 pm Post subject: |
|
|
Ahhh,
So The Terminator was not just a movie but an actual documentary... _________________ I'm a Dodger's fan, but...
Now Kershaw is a Champion |
|
Back to top |
|
 |
C M B Franchise Player


Joined: 15 Nov 2006 Posts: 19706 Location: Prarie & Manchester, high above the western sideline
|
Posted: Sat Jul 08, 2023 12:15 am Post subject: |
|
|
Daikatana wrote: | Ahhh,
So The Terminator was not just a movie but an actual documentary... |
There is no fate but what we make for ourselves. _________________ http://chickhearn.ytmnd.com/
Sister Golden Hair wrote: | LAMAR ODOM is an anagram for ... DOOM ALARM
|
|
|
Back to top |
|
 |
numero-ocho Franchise Player

Joined: 27 Jul 2004 Posts: 18053 Location: Los Angeles, CA
|
Posted: Sun Jul 09, 2023 4:15 pm Post subject: |
|
|
This one cracked me up when I heard about it.
Quote: | https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer-used-chatgpt-in-court-and-cited-fake-cases-a-judge-is-considering-sanctions/?sh=5e1529197c7f |
Quote: | The lawyer for a man suing an airline in a routine personal injury suit used ChatGPT to prepare a filing, but the artificial intelligence bot delivered fake cases that the attorney then presented to the court, prompting a judge to weigh sanctions as the legal community grapples with one of the first cases of AI “hallucinations” making it to court. |
_________________ "Suck it up. Don't be a baby. Do your job." - Kobe Bryant |
|
Back to top |
|
 |
DancingBarry Editor-in-Chief

Joined: 07 Sep 2001 Posts: 40052 Location: O.C.
|
Posted: Mon Jul 24, 2023 9:52 am Post subject: |
|
|
AI + Human Brain Cells = DishBrain
Quote: |
Research to merge human brain cells with AI secures national defence funding
Monash University-led research into growing human brain cells onto silicon chips, with new continual learning capabilities to transform machine learning, has been awarded almost $600,000 AUD in the prestigious National Intelligence and Security Discovery Research Grants Program.
The new research program, led by Associate Professor Adeel Razi, from the Turner Institute for Brain and Mental Health, in collaboration with Melbourne start-up Cortical Labs, involves growing around 800,000 brain cells living in a dish, which are then “taught” to perform goal-directed tasks. Last year the brain cells’ ability to perform a simple tennis-like computer game, Pong, received global attention for the team’s research.
According to Associate Professor Razi, the research program’s work using lab-grown brain cells embedded onto silicon chips, “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms,” he said.
LINK
|
What could go wrong... |
|
Back to top |
|
 |
Wilt LG Contributor


Joined: 29 Dec 2002 Posts: 13412
|
Posted: Mon Jul 24, 2023 12:55 pm Post subject: |
|
|
A student's paper was 98% written by AI. For some reason, he added a few words in the middle of the paper.
When I asked him to explain, his response was "I really don't know how that happened."  _________________ ¡Hala Madrid! |
|
Back to top |
|
 |
DancingBarry Editor-in-Chief

Joined: 07 Sep 2001 Posts: 40052 Location: O.C.
|
Posted: Fri Aug 04, 2023 2:24 pm Post subject: |
|
|
Air Force uses AI in unmanned flight for first time.
I don't think this article mentions it, but pilots have been getting smoked by AI in simulations. This is the probably the final blow to manned air combat. Next up, swarms.
Quote: |
In a historic first, the US Air Force Research Laboratory (AFRL) successfully flew an XQ-58A Valkyrie aircraft piloted entirely by AI. The 3-hour flight took place on July 25th at Eglin Air Force Base, marking a major step forward for autonomous military aviation.
The XQ-58 Valkyrie, a low-cost, high-performance, stealthy unmanned combat aerial vehicle, has been at Eglin for a little less than a year. The successful flight is a testament to the intensive development process the Autonomous Air Combat Operations (AACO) team underwent in creating the AI algorithms. They honed the AI during millions of hours in high-fidelity simulation events, sorties on the X-62 VISTA, Hardware-in-the-Loop events with the XQ-58A, and ground test operations. It is not just an accomplishment for the AFRL, but a clear signal of the direction that modern aviation and warfare are heading.
According to Col. Tucker Hamilton, Air Force AI Test and Operations chief, the flight proved the multi-layer safety framework for AI-flown aircraft and demonstrated the AI's ability to solve relevant air combat challenges.
https://www.maginative.com/article/us-airforce-sucessfully-uses-ai-to-pilot-xq-58a-valkyrie
|
|
|
Back to top |
|
 |
Cutheon Franchise Player

Joined: 10 Jul 2009 Posts: 11631 Location: Bay Area
|
Posted: Fri Aug 04, 2023 2:32 pm Post subject: |
|
|
Wilt wrote: | A student's paper was 98% written by AI. For some reason, he added a few words in the middle of the paper.
When I asked him to explain, his response was "I really don't know how that happened."  |
If it was written by a human, what grade would you have given it? |
|
Back to top |
|
 |
Halflife Franchise Player

Joined: 15 Aug 2015 Posts: 15511
|
|
Back to top |
|
 |
Wilt LG Contributor


Joined: 29 Dec 2002 Posts: 13412
|
Posted: Fri Aug 04, 2023 8:20 pm Post subject: |
|
|
Cutheon wrote: | Wilt wrote: | A student's paper was 98% written by AI. For some reason, he added a few words in the middle of the paper.
When I asked him to explain, his response was "I really don't know how that happened."  |
If it was written by a human, what grade would you have given it? |
Well, it was a book report of a very popular historical monograph. The actual content of the book was covered adequately. The paper covered the thesis of the book and the most important chapters. However, the actual prompt asks them to do more in terms of organization, examination of the sources the author uses, how the book fits within our course, etc. So all of that stuff was missing from the paper, which would have lowered the grade significantly.
The actual writing is dry, without personality. The paper wasn't engaging at all. It misses the human touch. _________________ ¡Hala Madrid! |
|
Back to top |
|
 |
kikanga Retired Number


Joined: 15 Sep 2012 Posts: 28191 Location: La La Land
|
Posted: Mon Aug 07, 2023 1:58 pm Post subject: |
|
|
Replace the brackets with 2023 professions. And this quote is just as true. 100 years later. And the they is AI.
Quote: | First they came for the [socialists], and I did not speak out—because I was not a [socialist].
Then they came for the [trade unionists], and I did not speak out—because I was not a [trade unionist].
Then they came for the [Jews], and I did not speak out—because I was not a [Jew].
Then they came for me—and there was no one left to speak for me.
—Martin Niemöller |
_________________ "I knew I was fly when I was just a caterpillar." |
|
Back to top |
|
 |
DancingBarry Editor-in-Chief

Joined: 07 Sep 2001 Posts: 40052 Location: O.C.
|
Posted: Thu Aug 31, 2023 8:54 am Post subject: |
|
|
Continuing off the QX-58 Valkyrie story above, Air Force is now asking for $6 billion to manufacture a massive fleet of those.
|
|
Back to top |
|
 |
|
|
You cannot post new topics in this forum You cannot reply to topics in this forum You cannot edit your posts in this forum You cannot delete your posts in this forum You cannot vote in polls in this forum
|
|