California Dreaming Again

By Jeff Harding   |   June 18, 2024

Since I criticize the great state of California for its policies that don’t work, I thought I would catch up on some recent news that caught my attention.

Minimum Wage

California fast food restaurant chains now have to pay their employees $20 per hour. My criticism of these new minimum wage laws is that they are set too high and will cause unemployment and business failures. That is what happens when you raise wages above what an employee produces.

California’s Rubio’s Coastal Grill filed for bankruptcy the other day, just two months after the new minimum wage law went into effect (April 1). The chain just closed 48 “underperforming” restaurants in California and is now down to 86. It blamed the rising cost of doing business here. I don’t think it’s a coincidence; higher labor costs were a major reason it folded.

Rubio’s is not alone. Businesses at the fast food end of the restaurant business just can’t arbitrarily raise prices to offset labor costs, especially when food costs and insurance costs here are also high. Huge chains like Chipotle can still thrive because they have 3,381 restaurants all over the country and can better offset costs and prices on a national basis. Small California chains like Rubio’s can’t compete with that.

Deepfake Regulation

I’m sure you have heard of deepfakes: images, text, and videos manipulated with artificial intelligence without the consent of the person or institution depicted. With the right technology a clever programmer can use AI to make anyone say anything. Imagine me extolling the virtues of socialism.

Our legislators in Sacramento think we need protection from deepfakes. They proposed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which authorizes the government to create a “Frontier Model Division” with vast power over those who run AI technologies. The purpose of this legislation is to allow the government to shut down AI systems that have “hazardous capabilities.”

This gets kind of complicated. The AI programs you have heard about, such as ChatGPT, are systems that gobble up all the data they can process from the internet and are then trained by technicians and certain algorithms to apply that data in a way that appears intelligent. These are called “large language models.”

If you have used ChatGPT you are probably amazed at how it can respond to your inquiries and requests in a way that mimics human responses. It can analyze raw data, write reports, compose poetry, and even create unique images. It is being widely used by companies who adapt it to their specific business models.

Just so you know, I write all my own stuff, but I occasionally use it for research. I carefully review anything it produces to verify the content.

I asked ChatGPT to analyze SB 1047. What I was after was a summary and analysis of the legislation and the danger of censorship of content.

Here is a summary of its summary:

SB 1047 is aimed at ensuring that the development and deployment of advanced AI models in California are conducted in a secure, transparent, and ethical manner. It introduces regulatory measures to manage the risks associated with AI, mandates public reporting, and establishes oversight mechanisms to enforce compliance. The bill reflects a proactive approach to balancing innovation with safety and public interest.

Sounds sort of reasonable, but it creates a legal framework to regulate the development of AI programs to assure that they are designed in ways that “benefit society” by “aligning with the public welfare” and don’t negatively impact things like privacy, security, and “social equity.”

The bill requires annual reports from AI developers certifying that their programs do not have “hazardous capability.” Certifications would need to be signed under penalty of perjury, making it a potential crime if the certifications are determined to be misleading or false.

They say they’ll make sure it doesn’t inhibit innovation. And they are pretty sure it won’t lead to censorship. The way ChatGPT put it: “SB 1047 may aim to protect against the harmful effects of misinformation without necessarily censoring legitimate speech.” But even ChatGPT was not so naïve as to miss that: “Critics of SB 1047 might argue that it could set a precedent for broader content regulation beyond just deepfakes, potentially infringing on free speech rights.”

I would take ChatGPT one step further. It is censorship. Who decides what’s a deepfake? What if Biden’s campaign creates a video speech, approved by Biden, that is fake in that Joe delivers his speech smoothly, confidently, without pauses or his usual flubs? All videos of Joe’s famous imperfections could be processed to give the impression that the guy is on top of things, hiding his flops and faux pas. Is that a deepfake?

Someone has to decide what is “harmful” fake content and what isn’t. In a political system there is too great an opportunity for misuse. I also think it will inhibit innovation by allowing government to dictate how AI will be used. Slippery slope, that. I don’t trust them, and history is on my side.
