Google is funding an AI project that will introduce the technical concept of regret into programs - but there's a big difference between regret and being sorry.
This probably isn't the news story you might be expecting. It is a story of misunderstanding and the media.
If I gave you a press release saying that a university research group was being funded by Google to program regret into computers, you might start to think about the psychology of machines and perhaps even sci-fi worries like Skynet or HAL, or similar machines that showed too much artificial intelligence with feeling. Certainly the media have been writing lots of stories along exactly those lines, even though some have been careful to reproduce the line
"Of course computers can't "feel" regret..."
but then ignored it and speculated on computers that do indeed feel regret. The disclaimer has not stopped headlines like "Google wants computers to feel regret", "Google looks to program regret and hindsight into computers", "Google wants to teach computers regret" and so on.
Google is funding a project at Tel Aviv University's Blavatnik School of Computer Science led by Professor Yishay Mansour. The project is an application of reinforcement learning (RL) principles. In RL the learning agent doesn't necessarily know how to improve its performance, but it does receive a reward that depends on how well it does. The reward can be positive or negative, and the idea is that positive rewards reinforce the most recent behaviour so that it is more likely to happen again. RL is a form of unsupervised learning in the sense that only the performance of the agent is used as feedback - there is no teacher to push the agent in the correct direction.
RL has its own jargon, just like any area of research, and the difference between the maximum reward obtainable and the reward actually received is called the "regret". In other words, an RL agent either tries to maximize the average long-term reward or, equivalently, minimize the average long-term regret.
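To see how unemotional the term really is, here is a minimal sketch (purely illustrative, not the Tel Aviv project's code) of regret in a two-armed bandit, one of the simplest RL settings. The agent picks arms with an epsilon-greedy rule, and "regret" is just the gap between the reward the best arm would have delivered on average and the reward the agent actually collected:

```python
import random

def run_bandit(steps=10000, epsilon=0.1, seed=0):
    """Epsilon-greedy two-armed bandit; returns average regret per step."""
    rng = random.Random(seed)
    arm_probs = [0.4, 0.6]       # expected reward of each arm (assumed values)
    best_mean = max(arm_probs)   # expected reward of the optimal arm
    counts = [0, 0]
    values = [0.0, 0.0]          # running average reward estimate per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                      # explore
        else:
            arm = 0 if values[0] >= values[1] else 1    # exploit best estimate
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    # "Regret" is just this number - no feelings involved
    regret = best_mean * steps - total_reward
    return regret / steps

print(run_bandit())
```

The agent "minimizes regret" simply by driving that number towards zero as it learns which arm pays better - which is exactly the sense in which the word is used in the research.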
Now think about the Google-sponsored project again, this time keeping in mind that regret is just a numerical measure of the difference between what the agent could have received and what it actually received. Now the headline should read "Google funds a project to implement an optimization algorithm". The research is probably just as potentially useful, but it is hardly the sensational story that is currently doing the rounds.
There are two things to learn from this situation. The first is that just because some numerical measure is called "regret", it doesn't mean it has anything to do with the common use of the term. Secondly, if you are going to invent an AI technique, then picking emotive words for your jargon is a good way to ensure publicity.
American Friends of Tel Aviv University