The Further Adventures of Matthew Saroff,
Itinerant Engineer
The end of, "Say Fuck January."
I'm sorry, but we have eleven months of, "f%$#," and "sh%$," ahead of us.
I feel like I'm f%$#ing losing the ability to speak.
Cats, it appears, believe that men do not listen to them. (No, this is not a run-up to a crude pun.)
It turns out that cats meow more and louder when trying to get the attention of men than when trying to get the attention of women.
I'm not particularly surprised.
As he lectured on animal behavior, Kaan Kerman, an instructor in the psychology department at Bilkent University in Turkey, noticed a pattern. Dog owners tend to confidently interpret their pets’ behavior, he said, “but cat owners are always puzzled.” Compared with dogs, cats have been studied less, partly because they prefer to stay at home.
“If you want to bring cats into the lab,” Dr. Kerman said, “good luck.”
When he and his colleagues asked cat owners for permission to film inside their homes, the response was enthusiastic. “As long as you give us some answers about our cats,” was a common reply. What that study found may not be as welcome among men who care for cats.
In a study published this month in the journal Ethology, the researchers reported that cats meow more frequently when greeting male caregivers. The team hypothesized that men “require more explicit vocalizations to notice and respond to the needs of their cats.” In other words, the researchers are suggesting that many cats have concluded that men don’t always listen, and adjusted their behavior accordingly.
I am sure that many women who have read this story have thought, "No shit, Sherlock."
It appears that Canadian Prime Minister Mark Carney has a little scandal on his hands because of his habit of using British spellings in some of his official documents.
I'm not sure if I should mock the Canadians for such a lame scandal, or envy them for having such a wholesome scandal.
No fishermen are getting murdered from this.
Mark Carney says that amid a fundamental shift to the nature of globalisation, his government will catalyse the growth in both the public and private sector.
But Canadian linguists say that’s a problem.
Language experts have called out the Canadian prime minister’s growing “utilisation” of British spellings in key documents – including the recent federal budget and a press release issued following a meeting with Donald Trump.
Carney, who served as the governor of the Bank of England for seven years, appears to have run afoul of Canadian linguistic norms, returning to his home country with a penchant for using ‘s’ instead of ‘z’ – a hallmark of British spellings.
In an open letter chastising the prime minister, six linguists have asked his office, the Canadian government and parliament to stick to Canadian English spelling, “which is the spelling they consistently used from the 1970s to 2025”.
I'm wondering if this is all an elaborate prank.
When Gavin Newsom's office calls Trump deportation czar Stephen Miller a "Fascist Cuck", I have to give credit where credit is due.
Well done.
Billionaires Convince Themselves AI Chatbots Are Close to Making New Scientific Discoveries—Gizmodo, on Travis Kalanick claiming that LLM generative AI will be making massive scientific discoveries any day now.
This should not be a surprise, coming from a guy whose skill is political lobbying, raising venture capital money, and breaking the law.
I don't know if the Silly-Con Valley billionaires started off this stupid, or if they have destroyed brain cells from sniffing their own farts once they got rich, but they are morons.
Generative artificial intelligence tools like ChatGPT, Gemini, and Grok have exploded in popularity as AI becomes mainstream. These tools don’t have the ability to make new scientific discoveries on their own, but billionaires are convinced that AI is on the cusp of doing just that. And the latest episode of the All-In podcast helps explain why these guys think AI is extremely close to revolutionizing scientific knowledge.
Travis Kalanick, the founder of Uber who no longer works at the company, appeared on All-In to talk with hosts Jason Calacanis and Chamath Palihapitiya about the future of technology. When the topic turned to AI, Kalanick discussed how he uses xAI’s Grok, which went haywire last week, praising Adolf Hitler and advocating for a second Holocaust against Jews.
There's more in the article, but the fact that the leech who created Uber thinks that it's hunky-dory to use an overgrown ELIZA program that has gone full Nazi as the path to enlightenment shows that something has gone profoundly wrong with him.
The Ukrainians have a great word for that. https://t.co/YKcvZPiMy7 pic.twitter.com/yDvn149fMN
— Lindsay Beyerstein (@beyerstein) June 6, 2025
War Pigs Make No Sense Without Elephants
You may recall that yesterday, I mentioned that flaming war pigs were a real historical thing.
Seeing as how this IS a real thing, it seemed to me that there should be some sort of combat event in the Society for Creative Anachronism recreating this.
This would be analogous to the occasional game of Buzkashi, a Central Asian equestrian game, that is conducted every now and then by SCA groups.
I was on the way to an SCA event with Sharon, and said this.
It's basic logic: since war pigs were a weapon specifically targeting elephants, any analogue must find a way to simulate both animals.
As soon as I said this, I saw how absurd what I had said was.
I do feel the need to note that this is not even close to the weirdest thing that I have ever said.
The weirdest thing that I have ever said was probably, "It's not Buffet Time at the Wildebeest."
My children were going feral at dinner over a decade ago, and it seemed to me that they resembled a pack of hyenas having a throw-down over a recent kill.
This did stop Charlie and Nat from misbehaving, because they were laughing too hard to do anything else.
There may be something seriously wrong with me.
You know, unlike the other 11 months of the year, I don't obscure expletives with various typographical symbols, "Sh%$," "F%$#," and so forth.
The reason is obvious for anyone who has a modicum of awareness of recent history. If you don't, well, your dose of Fukitol® is TOO STRONG. (I miss the simplicity of Netscape's elegant and widely loathed blink tag.)
Contact your doctor or pharmacist.
I am profane by nature, and eleven months of the year, I will obscure this with various typographical symbols, "Sh%$," "F%$#," and so forth.
Ever since the fucking January 6, 2021 insurrection, an event that I thought gave me fucking license to actually say things like shit and fuck, I have reserved January for actual swearing.
I am sure my reader(s)' delicate fucking sensibilities might be triggered by this, so consider this a fucking warning.
I will be fucking swearing all this fucking month, though I will, as always, not use the C-word. That has always crossed a line for me.
Why did I pick January?
First it is a tradition from the 2021 insurrection, and second, given that January is a time for reviewing the previous year, it is difficult to say, "2024 was fucked up and shit," without actually saying, "Fuck" and, "Shit."
As to why have such a month at all? It is because I need this month, because everything is fucked up and shit.
(Archive.is link here) Basically, this is about people who look at what is going on, and understand the consequences down the road.
Think of the myth of Cassandra, as the author of this piece does:
………
Many of us have been identifying strongly with Cassandra over the last few years. We watch the media downplay and dismiss one threat after another. We endure endless opinion pieces about everything from climate alarmism to coronaphobia. Influencers accuse us of hurting everyone’s mental health. Strangers call us doomers and fearmongers. Our friends and family treat us like we’re paranoid. When we share dozens or even hundreds of studies, they refuse to look at them. They say, “I don’t want to read anything that’ll bring me down.”
“I’m trying to stay positive.”
Americans and Westerners in general are suffering from a pandemic of denial, wishful thinking, and toxic positivity. It impedes us at every turn, on almost every serious issue. It exacerbates our existing anxiety and contributes to our sense of despair about the future of the planet. Here’s the thing:
You’re not a fearmonger.
You have sentinel intelligence.
Sentinel intelligence refers to a special cognitive ability that allows someone to detect threats before anyone else. Richard A. Clarke and R.P. Eddy talk about this trait in their book, Warnings: Finding Cassandras to Stop Catastrophes. They review a number of natural and economic disasters throughout history. As they write, “in each instance a Cassandra was pounding the table and warning us precisely about the disasters that came as promised.” Not only were they ignored, but “the people with the power to respond often put more effort into discounting the Cassandra than saving lives and resources.”
This ability is not uncommon, as the author suggests, nor is it a superpower.
For many people, in our currently dysfunctional society, they simply do not have the time to look at what is going on. They are living from paycheck to paycheck and hanging on by their fingernails.
Those who are in a position where they can observe and draw obvious conclusions understand that acknowledging and reacting to potential threats will result in the loss of their jobs, and as Upton Sinclair noted, "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
That's why you have things like CDC directors pretending that Covid is over, and the Great Barrington Declaration.
Right or wrong, you get fame and fortune for not addressing future risks.
There is a term that is used in my family, "Comyuckle," that generally means a lame person.
It's used as a loving insult inside the family.
I thought that it was some sort of Yiddishism that came from my mom, but I could never find anyone outside of the family who used this term.
Well, I did another search today, and I found this mention, which cites The Joys of Yiddish:
While visiting with my 94-year-old mother earlier this year, we were talking about the everyday person on the street. She said, “You know, Chaim Yankel!” I instantly cracked up as she did as well. My husband, who was sitting with us, had never heard of Chaim Yankel. Mom and I were both stunned.
I grew up on Long Island and my husband in Queens. Yet he never heard of him. So I did a quick check with others. A good friend from Long Island had also never heard of Chaim Yankel, but my friend from the Bronx knew instantly who he was; she, too, started laughing as she hadn’t heard his name mentioned in a long time.
In his splendid book, The Joys of Yiddish, Leo Rosten gives two definitions of the Yiddish term Chaim Yankel:
1. A nonentity, a nobody, any “poor Joe.”
2. A colloquial, somewhat condescending way of addressing a Jew whose name you do not know — just as “Joe” or “Mac” is sometimes used in English.
So, "Comyuckle," is probably a corruption of Chaim Yankel.
Needless to say, I need to share this with my sibs.
ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting!—Scientific American
Yes, I know I did not Bowdlerize the swear word, even though it's not January, but it is too good not to quote completely, particularly as a headline for Scientific American.
It's also accurate. The failures of LLM artificial intelligence are not some sort of bizarre artifact; they are the result of a deliberate decision by the makers of ChatGPT and its competitors to bullsh%$ in the hope that they can create the illusion of actual useful intelligence:
Right now artificial intelligence is everywhere. When you write a document, you’ll probably be asked whether you need your “AI assistant.” Open a PDF and you might be asked whether you want an AI to provide you with a summary. But if you have used ChatGPT or similar programs, you’re probably familiar with a certain problem—it makes stuff up, causing people to view things it says with suspicion.
It has become common to describe these errors as “hallucinations.” But talking about ChatGPT this way is misleading and potentially damaging. Instead call it bullshit.
We don’t say this lightly. Among philosophers, “bullshit” has a specialist meaning, one popularized by the late American philosopher Harry Frankfurt. When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don’t care whether what they say is true. ChatGPT and its peers cannot care, and they are instead, in a technical sense, bullshit machines.
Frankfurt's essay was On Bullsh%$, and David Graeber's Bullsh%$ Jobs further explored the concept of bullsh%$.
………
This isn’t rare or anomalous. To understand why, it’s worth thinking a bit about how these programs work. OpenAI’s ChatGPT, Google’s Gemini chatbot and Meta’s Llama all work in structurally similar ways. At their core is an LLM—a large language model. These models all make predictions about language. Given some input, ChatGPT will make some prediction about what should come next or what is an appropriate response. It does so through an analysis of enormous amounts of text (its “training data”). In ChatGPT’s case, the initial training data included billions of pages of text from the Internet.
From those training data, the LLM predicts, from some text fragment or prompt, what should come next. It will arrive at a list of the most likely words (technically, linguistic tokens) to come next, then select one of the leading candidates. Allowing for it not to choose the most likely word each time allows for more creative (and more human-sounding) language. The parameter that sets how much deviation is permitted is known as the “temperature.” Later in the process, human trainers refine predictions by judging whether the outputs constitute sensible speech. Extra restrictions may also be placed on the program to avoid problems (such as ChatGPT saying racist things), but this token-by-token prediction is the idea that underlies all of this technology.
Now, we can see from this description that nothing about the modeling ensures that the outputs accurately depict anything in the world. There is not much reason to think that the outputs are connected to any sort of internal representation at all. A well-trained chatbot will produce humanlike text, but nothing about the process checks that the text is true, which is why we strongly doubt an LLM really understands what it says.
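The token-by-token sampling with a "temperature" parameter that the article describes can be sketched in a few lines. This is a toy illustration with a made-up vocabulary and made-up logits, not any vendor's actual implementation:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick the next token from a {token: logit} dict.

    Lower temperature concentrates probability on the top candidates
    (approaching greedy selection); higher temperature flattens the
    distribution, allowing less likely, more "creative" choices.
    """
    rng = rng or random.Random()
    tokens = list(logits)
    # Softmax over temperature-scaled logits.
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy "model": invented logits for what follows "The cat sat on the".
logits = {"mat": 4.0, "sofa": 2.5, "moon": 0.5}

# Near-zero temperature behaves almost greedily: "mat" nearly always wins.
picks = [sample_next_token(logits, temperature=0.1) for _ in range(100)]
print(picks.count("mat"))  # expect close to 100
```

Note that nothing in this loop consults the world; the sampler only ranks what text is statistically plausible, which is the mechanical heart of the "bullshit machine" argument above.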
It's bullsh%$, they know that it's bullsh%$, but these snollygosters know that they can exploit the AI mania before it all collapses like a bunch of broccoli.
If the US government were to actually prosecute tech bro fraud, we'd see 80% of the giants of Silly-Con Valley in the dock.
Google AI overview suggests adding glue to get cheese to stick to pizza, and it turns out the source is an 11 year old Reddit comment from user F*cksmith 😂 pic.twitter.com/uDPAbsAKeO
— Peter Yang (@petergyang) May 23, 2024
I just came across an op/ed by Noam Chomsky in the New York Times where he dropped some much needed reality on ChatGPT.
He wrote it before the glorified ELIZA program became known for, "Glue on Pizza," and other such insanity.
Chomsky argues that not only are large language models incapable of achieving intelligence, they cannot ever achieve language either:
Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
………
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
………
Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
I would suggest you read the whole article, and then, if you are a betting man looking for an investment opportunity, find a way to short AI over the next 18-36 months.
The colloquies with ChatGPT embedded in the article are a hoot, particularly if you, as I did, first encountered the ELIZA program, a simulated therapist, back in 1980, and heard exactly the same sort of meaningless parroting of communication.
That being said, I do not think that Joe Biden's slowing down is an artifact of any age-related infirmity.
I think that it is the result of over two decades of hard work.
The younger Joe Biden was a human gaffe machine, and while the press loved this, it is clear that Joe Biden did not, so he has spent the last (at least) 20 years teaching himself to take a pause and think about what he says before he says it.
It's not disability, it's self discipline.
The next time someone comments on this, your response should be, "Yeah, no more gaffes. How'd he do that? My kids would like to know, so that I can stop embarrassing the hell out of them."
So it's back to %$#ing the swear words.
I am not quite sure why, when I started this shitty little blog, I decided to Bowdlerize the swear words, but the events of January 6, 2021 required actual Anglo-Saxon expletives, and I have continued to do this every January since.
I guess that it is my equivalent of Dry January (Come to think of it, I've actually not had anything to drink this month except for a teaspoon of cider that I am brewing in order to see if it needed more sugar), or Vegan January.
I'm sorry, but like all good things, this has to come to an end.
Let me leave you with one final Word:
Fuck!
It is, "Enshittification," a neologism coined by Cory Doctorow to describe how companies, particularly digital companies, become worse and worse as time goes on:
The American Dialect Society, in its 34th annual words-of-the-year vote, selected “enshittification” as the Word of the Year for 2023. More than three hundred attendees took part in the deliberations and voting, in an event hosted in conjunction with the Linguistic Society of America’s annual meeting.
The term enshittification became popular in 2023 after it was used in a blog post by author Cory Doctorow, who used it to describe how digital platforms can become worse and worse. “Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification,” Doctorow wrote on his Pluralistic blog.
Presiding at the Jan. 6 voting session were Ben Zimmer, chair of the ADS New Words Committee and language columnist for the Wall Street Journal, and Dr. Kelly Elizabeth Wright of Virginia Tech, data czar of the New Words Committee. “Enshittification is a sadly apt term for how our online lives have become gradually degraded,” Zimmer said. “From the time that it first appeared in Doctorow’s posts and articles, the word had all the markings of a successful neologism, being instantly memorable and adaptable to a variety of contexts.”
What Doctorow describes is a situation where the companies set themselves up as a monopoly seller to their customers and a monopsony purchaser of users.
Think about how Google has 90% of the search market, and so has a monopoly on the product, which is you, and they are a part of a meticulously constructed oligopoly, along with Apple, Facebook, and Amazon, on advertising.
The net result is shitty search, and increasingly ineffective and expensive advertising.
In a word, and it is the 2023 word of the year, this is "Enshittification."
Ever since the January 2021 insurrection, an event that I thought gave me license to actually say things like shit and fuck, I have reserved January for actual swearing.
While I am sure my reader(s) delicate sensibilities might be triggered by this, so consider this a fucking warning.
I will be fucking swearing all this month, though I will, as always, not use the C-word. That has always crossed a line for me.
Why did I pick January?
First it is a tradition from the 2021 insurrection, and second, given that January is a time for reviewing the previous year, it is difficult to say, "2023 was fucked up and shit," without actually saying, "Fuck" and, "Shit."
As to why have such a month at all? It is because I need this month, because everything is fucked up and shit.
Affect = Fuck around
Effect = Find out
— Jaqueline Godless (@YesSumClever) November 2, 2021
Maybe it is just me, but I have always had problems with the effect/affect thing.
This helps a lot.
Breaking my 11-month-a-year profanity embargo, because ……… Fuck CIGNA.
I was one of their customers at one point, I am quite sincere when I say ……… Fuck CIGNA.
My regular reader(s) are probably aware that I have expressed dissatisfaction with the insurer in the past.
So I am amused that CIGNA is being sued for using software that allowed its examiners to deny thousands of claims per hour.
Here is hoping that they are hung out to dry:
Cigna, the healthcare and insurance giant, was hit with a lawsuit on Monday that alleges the company systematically rejects claims in a matter of seconds, thanks to an algorithmic system put in place to help automate the process—further raising questions about how technology could harm patients as more healthcare organizations look to embrace AI and other new tools.
………
The health insurer’s digital claims system, called PXDX, is an “improper scheme designed to systematically, wrongfully, and automatically deny its insureds medical payments owed to them under Cigna’s insurance policies,” the complaint alleges.
………
The suit follows a ProPublica investigation in March that detailed Cigna’s software system for approving and denying claims in batches. The algorithm works by flagging discrepancies between a diagnosis and what Cigna considers “acceptable tests and procedures for those ailments,” according to the lawsuit.
Over two months last year, the company denied more than 300,000 claims, spending an average of 1.2 seconds on each claim, ProPublica reported. While medical doctors signed off on the denials, the system didn’t require them to open patient medical records for the review. The complaint says that this violates a California competition law for unfair and fraudulent business acts. The suit also alleges the system violates the state’s insurance code for failing to adopt a “reasonable standard” for processing claims.
At 1.2 seconds per claim, that is 3,600 seconds ÷ 1.2 = 3,000 denials per hour per agent.
I was not engaging in hyperbole above.
Somehow, I do not think that any jury will consider 3,000 denials an hour an honest business practice.
Let me finish by saying, ……… Fuck CIGNA.

I reserve the right to reprint any email correspondence on my blog.
If you want to keep your correspondence private, please tell me.
A member of the Democratic wing of the Democratic party, and a fan of Bernie who thinks Neoliberal (DLC/New Dem) trickle down economics sucks.
Mechanical Engineer with a background in defense, electronics packaging, medical & food equipment, transportation, and manufacturing.
In my spare time (Hah!), I am the developer of the Firefox addon, bbCode for Web Extensions (bbCodeWebEx).
I have two cats, a black cat and a gray and white long-haired cat, who keep me on my toes. (Because he keeps attacking my feet.)
I am a Jew and a Zionist, who is married to a woman with exquisitely bad taste in men, and I have two remarkable children with her.
It's a posting ground for my more-or-less annual personal newsletter, 40 Years in the Desert. (PDFs available at link)
I find that if I wait until year's end I miss stuff from earlier in the year.
40 Years is put out the old-fashioned way: it's printed on ledger-sized paper, four pages, and mailed to people, with a total circulation of about 100.
I'm just not the holiday card kind of guy. A warning, if you comment here, I may use it in my paper publication.
You will get credit, and if I can get your postal address, you will get at least the issue where you are quoted (probably a lot more, I rarely trim my list).
If someone actually wants to pay for an issue...I don't know, I guess a buck, but you can get the PDFs free.
I intend to post at least a couple of times a week.