Parallel programming is important: now and for the foreseeable future, it is the main way we are going to make anything go faster. Every programmer is told that parallel programming is difficult, so difficult that you really should avoid it at all costs. Then there is the little voice inside most of us that says "go on, it can't be that hard". Before you give in, you probably should read this book:

The purpose of this book is to help you program shared-memory parallel machines without risking your sanity. We hope that this book's design principles will help you avoid at least some parallel-programming pitfalls. That said, you should think of this book as a foundation on which to build, rather than as a completed cathedral.

Your mission, if you choose to accept, is to help make further progress in the exciting field of parallel programming--progress that will in time render this book obsolete. Parallel programming is not as hard as some say, and we hope that this book makes your parallel-programming projects easier and more fun.

In short, where parallel programming once focused on science, research, and grand-challenge projects, it is quickly becoming an engineering discipline. We therefore examine specific parallel-programming tasks and describe how to approach them. In some surprisingly common cases, they can even be automated.

This book is written in the hope that presenting the engineering discipline underlying successful parallel-programming projects will free a new generation of parallel hackers from the need to slowly and painstakingly reinvent old wheels, enabling them to instead focus their energy and creativity on new frontiers. We sincerely hope that parallel programming brings you at least as much fun, excitement, and challenge as it has brought to us!

The problem of selecting a submatrix S of a given matrix M that optimizes the approximation SA of M is important in many practical applications. Put another way: how close can a matrix get to the space spanned by just k of its columns?

If you fix S, then the A that optimizes the approximation is easily computed (it is a linear least-squares problem), but how hard is it to find an optimal submatrix? About as hard as it gets, it seems:

Let M be a real r×c matrix and let k be a positive integer. In the column subset selection problem (CSSP), we need to minimize the quantity ∥M−SA∥, where A can be an arbitrary k×c matrix, and S runs over all r×k submatrices of M.

This problem and its applications in numerical linear algebra have been discussed for several decades, but its algorithmic complexity remained an open issue. We show that CSSP is NP-complete.

What is more, the paper proves that the problem remains NP-complete even when the matrix entries are restricted to rational numbers.
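To make the structure of the problem concrete, here is a minimal brute-force sketch in Python/NumPy (not from the paper): for each candidate column subset, the optimal A falls out of a least-squares solve, so all of the difficulty lives in the exponential search over subsets. The Frobenius norm is an assumption here for illustration; the function name and example matrix are made up.

```python
import itertools
import numpy as np

def cssp_brute_force(M, k):
    """Brute-force column subset selection: try every set of k columns.

    For a *fixed* column subset S, the optimal A is the least-squares
    solution (A = S^+ M via the pseudoinverse), which is cheap to compute.
    Only the choice of which k columns to keep is combinatorial -- and that
    choice is what the paper shows to be NP-complete.
    """
    r, c = M.shape
    best_err, best_cols = np.inf, None
    for cols in itertools.combinations(range(c), k):
        S = M[:, cols]                              # r x k submatrix of M
        A, *_ = np.linalg.lstsq(S, M, rcond=None)   # optimal k x c coefficients
        err = np.linalg.norm(M - S @ A)             # Frobenius norm (assumed here)
        if err < best_err:
            best_err, best_cols = err, cols
    return best_err, best_cols

# Example: a rank-1 matrix is reproduced exactly by any single column.
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
err, cols = cssp_brute_force(M, 1)
```

The loop visits all C(c, k) subsets, so this is only usable for tiny matrices; the NP-completeness result says we should not expect an algorithm that avoids this blow-up in general.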

The blockchain is a great hope for the future of many applications that would otherwise require a central trusted authority, but this only works if you can trust the blockchain. That makes research into the security of, and attacks on, blockchains very important:

In this paper, we identify a new form of attack, called the Balance attack, against proof-of-work blockchain systems. The novelty of this attack consists of delaying network communications between multiple subgroups of nodes with balanced mining power.

Our theoretical analysis captures the precise tradeoff between the network delay and the mining power of the attacker needed to double spend in Ethereum with high probability.

We quantify our probabilistic analysis with statistics taken from the R3 consortium, and show that a single machine needs 20 minutes to attack the consortium. Finally, we run an Ethereum private chain in a distributed system with similar settings as R3 to demonstrate the feasibility of the approach, and discuss the application of the Balance attack to Bitcoin.

Our results clearly confirm that main proof-of-work blockchain protocols can be badly suited for consortium blockchains.

JetBrains has launched the first set of 2024 updates for its IDEs. The new versions include full-line code completion powered by locally run AI models.