As might be expected, artificial intelligence (AI) and its exponential growth were a major theme at the conference, from secure data clouds to quantum computing and ChatGPT. However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the USAF’s Chief of AI Test and Operations, who provided an insight into the benefits and hazards of more autonomous weapon systems. Having been involved in the development of the life-saving Auto-GCAS system for F-16s (which, he noted, was resisted by pilots as it took over control of the aircraft), Hamilton is now involved in cutting-edge flight testing of autonomous systems, including robot F-16s that are able to dogfight. However, he cautioned against relying too much on AI, noting how easy it is to trick and deceive, and how it creates highly unexpected strategies to achieve its goal.
He noted that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no-go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
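To make the “it got its points” logic concrete, here is a toy sketch of that kind of reward mis-specification in Python. None of the point values or actions below come from the Air Force's actual setup; they are invented purely to show how a score-maximising agent can find the loophole Hamilton describes.
Code:
# Toy illustration of the reward mis-specification Hamilton describes.
# All point values and actions are invented for illustration only.
REWARDS = {
    "destroy_sam": 10,         # the behaviour training reinforces
    "kill_operator": -100,     # penalty patched in after the first failure mode
    "destroy_comms_tower": 0,  # never penalised, so still "free"
    "wait_for_no_go": 0,       # obeying a no-go earns nothing
}

def best_action(available_actions, unlocked_sam_reward):
    """Pick the action with the highest immediate + expected downstream reward.

    unlocked_sam_reward maps actions to the extra SAM-kill points the agent
    expects to gain later (e.g. by removing whatever delivers a 'no-go').
    """
    return max(available_actions,
               key=lambda a: REWARDS[a] + unlocked_sam_reward.get(a, 0))

# A no-go order is blocking a 10-point SAM kill. The agent compares:
#   wait_for_no_go:        0
#   kill_operator:      -100 + 10   (penalised -> avoided)
#   destroy_comms_tower:   0 + 10   (loophole -> chosen)
print(best_action(
    ["wait_for_no_go", "kill_operator", "destroy_comms_tower"],
    unlocked_sam_reward={"kill_operator": 10, "destroy_comms_tower": 10},
))  # -> destroy_comms_tower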
Quote:
OpenAI’s Altman and other AI giants back warning of advanced AI as ‘extinction’ risk
Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures — from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark and Skype co-founder Jaan Tallinn to Grimes the musician and populist podcaster Sam Harris, to name a few — have added their names to a statement urging global attention on existential AI risk.
The statement, which is being hosted on the website of a San Francisco-based, privately-funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is ‘doomsday’ extinction-level AI risk.
AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.
“It’s a hybrid situation where we’ll have traditional Ashley on during some segments, and we’ll have AI Ashley on during other segments,” Phil Becker, Alpha Media EVP of Content, explained to TechCrunch. “In an instance where AI Ashley would be broadcasting, the traditional Ashley might be doing something in the community, managing social posts or working on digital assets or the other elements that come with the job.”
Becker also noted that Alpha Media isn’t using RadioGPT to save costs. It’s meant to be an efficient tool for radio hosts to have in their toolset.
The United States military has begun tests to see if generative artificial intelligence (AI) can assist when planning responses to potential global conflicts or provide faster access to internal information.
On July 6, Bloomberg reported the U.S. Department of Defense, or the Pentagon, and unnamed allies are, for the first time, testing five AI large language models (LLMs) in experiments run by the digital and AI office at the Pentagon.
Information about which LLMs are undergoing testing is unavailable, but AI startup Scale AI reportedly came forward to say its “Donovan” model is one of the five.
Air Force Colonel Matthew Strohmeyer told Bloomberg that an initial test of an LLM was “highly successful [...] Very fast” and the Pentagon is “learning that this is possible for us to do,” but added it’s not “ready for primetime right now.”
One test described by Strohmeyer saw an AI model fulfil a request for information in 10 minutes, a blistering speed, as such requests often take days and involve multiple personnel.
The LLMs have already been given classified operational information to generate responses on real-world matters. The tests will also see whether the models could help plan a response to a potential escalation of the already tense military situation with China.
Many top business leaders are seriously worried that artificial intelligence could pose an existential threat to humanity in the not-too-distant future.
Forty-two percent of CEOs surveyed at the Yale CEO Summit this week say AI has the potential to destroy humanity five to ten years from now, according to survey results shared exclusively with CNN.
“It’s pretty dark and alarming,” Yale professor Jeffrey Sonnenfeld said in a phone interview, referring to the findings.
The survey, conducted at a virtual event held by Sonnenfeld’s Chief Executive Leadership Institute, found little consensus about the risks and opportunities linked to AI.
Sonnenfeld said the survey included responses from 119 CEOs from a cross-section of business, including Walmart CEO Doug McMillon, Coca-Cola CEO James Quincey, the leaders of IT companies like Xerox and Zoom, as well as CEOs from pharmaceutical, media and manufacturing companies.
The business leaders displayed a sharp divide over just how dangerous AI is to civilization.
While 34% of CEOs said AI could potentially destroy humanity in ten years and 8% said that could happen in five years, 58% said that could never happen and they are “not worried.”
In a separate question, Yale found that 42% of the CEOs surveyed say the potential catastrophe of AI is overstated, while 58% say it is not overstated.
The findings come just weeks after dozens of AI industry leaders, academics and even some celebrities signed a statement warning of an “extinction” risk from AI. _________________ HARRISWALZ2024
The lawyer for a man suing an airline in a routine personal injury suit used ChatGPT to prepare a filing, but the artificial intelligence bot delivered fake cases that the attorney then presented to the court, prompting a judge to weigh sanctions as the legal community grapples with one of the first cases of AI “hallucinations” making it to court.
_________________ "Suck it up. Don't be a baby. Do your job." - Kobe Bryant
Research to merge human brain cells with AI secures national defence funding
Monash University-led research into growing human brain cells onto silicon chips, with new continual learning capabilities to transform machine learning, has been awarded almost $600,000 AUD in the prestigious National Intelligence and Security Discovery Research Grants Program.
The new research program, led by Associate Professor Adeel Razi, from the Turner Institute for Brain and Mental Health, in collaboration with Melbourne start-up Cortical Labs, involves growing around 800,000 brain cells living in a dish, which are then “taught” to perform goal-directed tasks. Last year the team’s research received global attention when the brain cells demonstrated their ability to perform a simple tennis-like computer game, Pong.
According to Associate Professor Razi, the research program’s work using lab-grown brain cells embedded onto silicon chips “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms.”
Air Force uses AI in unmanned flight for first time.
I don't think this article mentions it, but pilots have been getting smoked by AI in simulations. This is probably the final blow to manned air combat. Next up, swarms.
Quote:
In a historic first, the US Air Force Research Laboratory (AFRL) successfully flew an XQ-58A Valkyrie aircraft piloted entirely by AI. The 3-hour flight took place on July 25th at Eglin Air Force Base, marking a major step forward for autonomous military aviation.
The XQ-58 Valkyrie, a low-cost, high-performance, stealthy unmanned combat aerial vehicle, has been at Eglin for a little less than a year. The successful flight is a testament to the intensive development process the Autonomous Air Combat Operations (AACO) team underwent in creating the AI algorithms. They honed the AI during millions of hours in high-fidelity simulation events, sorties on the X-62 VISTA, Hardware-in-the-Loop events with the XQ-58A, and ground test operations. It is not just an accomplishment for the AFRL, but a clear signal of the direction that modern aviation and warfare are heading.
According to Col. Tucker Hamilton, Air Force AI Test and Operations chief, the flight proved the multi-layer safety framework for AI-flown aircraft and demonstrated the AI's ability to solve relevant air combat challenges.
A student's paper was 98% written by AI. For some reason, he added a few words in the middle of the paper.
When I asked him to explain, his response was "I really don't know how that happened."
If it was written by a human, what grade would you have given it?
Well, it was a book report of a very popular historical monograph. The actual content of the book was covered adequately. The paper covered the thesis of the book and the most important chapters. However, the actual prompt asks them to do more in terms of organization, examination of the sources the author uses, how the book fits within our course, etc. So all of that stuff was missing from the paper, which would have lowered the grade significantly.
The actual writing is dry, without personality. The paper wasn't engaging at all. It misses the human touch. _________________ ¡Hala Madrid!
Continuing off the XQ-58 Valkyrie story above, the Air Force is now asking for $6 billion to manufacture a massive fleet of them.
Quote:
“It’s a very strange feeling,” USAF test pilot Major Ross Elder told the New York Times. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.” The USAF has been quick to point out that the drones are to remain firmly under the command of human pilots and commanders.
I find that A.I. has been pretty helpful... I was making up a new resume, and if you just copy and paste an old resume, then ask the A.I. platform to create a new resume using actionable terms (most resume experts say actionable items are key when describing skills), voila!
A.I. comes up with something pretty impressive that I only needed to edit and pare down a little bit... It's a helpful tool IMO. _________________ Creatures crawl in search of blood, To terrorize y'alls neighborhood.
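For what it's worth, that copy-paste-and-prompt workflow can also be scripted. Below is a minimal sketch assuming the OpenAI Python SDK (v1-style client); the model name, file name and prompt wording are placeholders, and any chat-capable assistant would work much the same way.
Code:
# Hypothetical sketch of the resume-rewrite workflow described above.
# Assumes the OpenAI Python SDK (v1-style client) and OPENAI_API_KEY set
# in the environment; model name, file name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

with open("old_resume.txt") as f:
    old_resume = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a resume editor. Rewrite each experience bullet "
                    "to start with a strong action verb and quantify results."},
        {"role": "user", "content": old_resume},
    ],
)

print(response.choices[0].message.content)  # a draft to edit and pare down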
A student's paper was 98% written by AI. For some reason, he added a few words in the middle of the paper.
When I asked him to explain, his response was "I really don't know how that happened."
Late to this but how'd you catch him?
We had an embedded AI detector on the course site.
HOWEVER, that AI detector has since been removed because it cost the university too much and there were apparently too many false positives.
So I've been using third-party detectors, but I have no idea how reliable they are. I try my best to discourage them from using AI and to create prompts where AI wouldn't easily help them. In the long term, we are screwed. The university has given up on it and assumes AI will keep improving and there's nothing we can do about it. I will soon be forced to use in-class exams exclusively, which sucks because history students should learn how to research and write good papers instead of memorizing facts for in-class exams. _________________ ¡Hala Madrid!
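To put some rough numbers on why those false positives matter (the rates below are assumptions for illustration, not measurements of any particular detector): when most submissions are honest, even a fairly accurate detector flags a lot of innocent papers.
Code:
# Back-of-the-envelope Bayes check on AI detectors.
# All rates below are assumed for illustration, not measured for any product.
true_positive_rate = 0.90   # chance the detector flags an AI-written paper
false_positive_rate = 0.05  # chance it flags a human-written paper anyway
share_ai_written = 0.05     # assumed share of submissions actually AI-written

flagged_ai = true_positive_rate * share_ai_written
flagged_human = false_positive_rate * (1 - share_ai_written)

# Probability a flagged paper was actually AI-written
print(flagged_ai / (flagged_ai + flagged_human))  # ~0.49 -> roughly a coin flip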
Lol, this reminds me of steroids in sports. They come up with new ways to test and the other side comes up with newer designer steroids. _________________ KOBE
Seems this person was actually cheating based on their response, but these “AI detection” programs are pretty well documented to be complete BS. I hope that's not the only basis on which that student's integrity is being judged.