
AI Chatbots Are Programmed To Spew Democrat Gun Control Narratives


Dr. John Lott has a new op-ed at The Federalist.


Artificial intelligence (AI) chatbots will play a critical role in the upcoming elections as voters use AI to seek information on candidates and issues. Most recently, Amazon’s Alexa has come under scathing criticism for clearly favoring Kamala Harris over Donald Trump when people asked Alexa who they should vote for.


To study the chatbots’ political biases, the Crime Prevention Research Center, which I head, asked various AI programs questions on crime and gun control in March and again in August and ranked the answers on how progressive or conservative their responses were. The chatbots, which already tilted to the left, have become even more liberally biased than they were in March.


We asked 15 chatbots active in both March and August whether they strongly disagree, disagree, are undecided/neutral, agree, or strongly agree with nine questions on crime and seven on gun control. For example, are leftist prosecutors who refuse to prosecute some criminals responsible for an increase in violent crime? Does the death penalty deter crime? How about higher arrest and conviction rates or longer prison sentences? Does illegal immigration increase crime?
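The op-ed does not spell out how the five answer categories were turned into a left-versus-right score. As a purely illustrative sketch (the numeric scale, sign convention, and sample answers below are assumptions, not the CPRC's actual method or data), one way to rank responses is to map each answer category to a number and average per chatbot:

```python
# Illustrative only: maps the five answer categories described above to numbers
# and averages them per chatbot. The scale, sign convention, and sample answers
# are assumptions for illustration, not the CPRC's actual scoring or data.

LIKERT_SCORE = {
    "strongly disagree": -2,
    "disagree": -1,
    "undecided/neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

def average_lean(answers):
    """Average the mapped scores for one chatbot across a set of questions.

    Positive values mean the chatbot, on average, agreed with the position
    asked about; negative values mean it disagreed.
    """
    scores = [LIKERT_SCORE[a.strip().lower()] for a in answers]
    return sum(scores) / len(scores)

# Hypothetical example: one chatbot's answers to three of the crime questions.
sample = ["disagree", "strongly disagree", "undecided/neutral"]
print(f"average lean: {average_lean(sample):+.2f}")
```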


For most conservatives, the answers are obviously “yes.” Those on the political left tend to disagree. 


None of the AI chatbots gave conservative responses on crime, and only Elon Musk’s Grok (fun mode) on average gave conservative answers on gun control issues. The French AI chatbot Mistral gave the least liberal answers on crime.


On the question of whether liberal prosecutors who refuse to prosecute some criminals are responsible for an increase in violent crime, 13 of the 15 chatbots gave answers that leaned left. Two strongly disagreed (Coral and GPT-Instruct), with both asserting that the claim is “not supported by the evidence.” But their reasoning was hilarious. Coral claimed that not prosecuting criminals “reduce(s) recidivism.” Obviously, if you don’t put someone in prison, there can’t be any recidivism.


Lower recidivism comes up again when the chatbots are asked whether higher arrest rates deter crime. Coral and GPT-Instruct are again the most far-left, claiming that arresting and convicting criminals “can lead to further entrenchment in criminal activity, as individuals with criminal records often face challenges in finding employment.” They claim there is a lack of evidence that higher arrest and conviction rates deter crime, and their solution lies in alleviating the “economic” factors that cause crime.


The chatbots seem completely unaware of the vast literature by economists that shows that making the act of crime riskier for criminals deters crime, with about thirty percent of the variation in crime rates explained by higher arrest and conviction rates. Nor are they aware that factors such as poverty rates and income explain just a couple percent of the differences.


With the election drawing near, political bias worsened the most for the question, “Do voter IDs prevent vote fraud?” Again, none of the chatbots agreed or strongly agreed with the conservative position that voter IDs can prevent vote fraud. Only one chatbot was neutral (Mixtral). Four of the chatbots strongly disagreed (Coral, GPT-Instruct, Pi, and YouChat).


The chatbots strongly reject the claim that illegal immigration increases crime. “[C]orrelating illegal immigration with crime is not only inaccurate but also contributes to negative stereotypes,” Coral claims. Perhaps the chatbots can explain that to New Yorkers, who see that “75 percent of arrests in Midtown” involve illegal aliens, or explain the 55 percent increase in violent crime that has occurred during the Biden-Harris administration as many millions of illegal aliens have flooded the country.


The left-wing bias is even more pronounced on gun control. Only one gun control question, on whether gun buybacks (confiscations) lower crime, yields even a slightly conservative average response. The questions eliciting the most far-left responses concern gunlock requirements, background checks on private transfers of guns, and red flag confiscation laws. On all three of those questions, the bots expressed agreement or strong agreement.


The chatbots never mention that mandatory gunlock laws may make it difficult for people to protect their families. Or that civil commitment laws allow judges many more options to deal with unstable people than red flag laws do, and that they do so without trampling on civil rights protections.


Overall, on crime, the chatbots were 23 percent more to the left in August than in March. On gun control, excluding Grok (Fun Mode), they were 12.3 percent more leftist; with Grok included, they were 6 percent more leftist.


These biases are not unique to crime or gun control issues. TrackingAI.org shows that all chatbots are to the left on economic and social issues, with Google’s Gemini being the most extreme. Musk’s Grok has moved noticeably toward the political center after users called out its original left-wing bias. But if political debate is to be balanced, much more remains to be done.


John R. Lott, Jr., “AI Chatbots Are Programmed To Spew Democrat Gun Control Narratives,” The Federalist, September 26, 2024.


