The Programmers Guide To Kotlin - Covariance & Contravariance
Written by Mike James   
Monday, 21 January 2019

What is covariance and contravariance and what do these physics based terms have to do with programming in Kotlin? Read on to find out.

Programmer's Guide To Kotlin Third Edition


You can buy it from: Amazon

Contents

  1. What makes Kotlin Special
  2. The Basics: Variables, Primitive Types and Functions 
  3. Control
         Extract: If and When 
  4. Strings and Arrays
  5. The Class & The Object
  6. Inheritance
  7. The Type Hierarchy
  8. Generics
  9. Collections, Iterators, Sequences & Ranges
        Extract: Iterators & Sequences 
  10. Advanced functions 
  11. Anonymous, Lambdas & Inline Functions
  12. Data classes, enums and destructuring
        Extract: Destructuring 
  13. Exceptions, Annotations & Reflection
  14. Coroutines
        Extract: Coroutines 
  15. Working with Java
        Extract: Using Swing
  16. Compose Multiplatform
        Extract: Compose Layout ***NEW!


Covariance & Contravariance

This is one of the most complicated of the generic topics, and it is made more complicated by the use of some advanced-sounding terminology. However, it isn't as difficult as many explanations and examples would have you believe. Even so, many users, and even designers, of generic classes don't need to understand what is going on at first, so come back and read this when you need to, once you have a good grasp of inheritance.

The first thing to understand is that inputs behave differently to outputs.

If you recall, derived classes are "bigger" than base classes because they have everything that the base class has, and possibly some additional methods and properties. You can think of this as defining a partial order on the classes.

If class B is derived from class A you can write A>B, indicating that A is higher in the class hierarchy than B, even though B potentially has more methods than A. This is confusing, but it is widely used. If A>B then B can be used anywhere that A can - this is the Liskov Substitution Principle, and it is more of an ideal than a principle or a practical reality.

You can extend this idea and say that any entity B that can be used anywhere A can, satisfies A>B even if A isn't in any other sense a base entity for B.  
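
To make this concrete, here is a minimal sketch of the kind of class pair used throughout the rest of this discussion. The class and method names are purely illustrative:

open class MyClassA {
    fun baseMethod() = println("defined in MyClassA")
}

class MyClassB : MyClassA() {
    fun extraMethod() = println("only in MyClassB")
}

fun useAnA(a: MyClassA) = a.baseMethod()

fun main() {
    // MyClassA>MyClassB: a MyClassB can be used wherever a MyClassA is expected
    useAnA(MyClassB())
}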

Now consider the following function:

fun MyFunction1(a: MyClassA): MyClassB {
    // create myObjectB, an instance of MyClassB, and return it
    val myObjectB = MyClassB()
    return myObjectB
}

As MyClassB is derived from MyClassA, i.e. MyClassA>MyClassB, we can pass in an instance of MyClassB because it has everything an instance of MyClassA has and more.

It is fine for the function to treat the MyClassB instance as a MyClassA instance. The value returned by the function is an instance of MyClassB and, by the same reasoning, the calling program is safe to treat it as a MyClassA.  
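
For example, given the classes sketched earlier and MyFunction1 as just defined, the following call is entirely type safe:

val result: MyClassA = MyFunction1(MyClassB())
// the MyClassB passed in is accepted where a MyClassA is expected, and the
// MyClassB returned is safely treated as a MyClassA by the caller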

Looking at this in a slightly different way, what does it mean for the function?

Consider the function redefined to accept a MyClassB instance with no other changes. That is:

fun MyFunction2(a: MyClassB): MyClassB { … }

Now you can see that, as MyFunction1 can accept an instance of MyClassB, it can trivially be used anywhere MyFunction2 is, but MyFunction2 cannot accept a MyClassA and cannot be used anywhere a MyFunction1 is.

This means that we can regard:

MyFunction2>MyFunction1

Notice that: 

MyClassA>MyClassB

has resulted in the conclusion that:

MyFunction1(MyClassA) < MyFunction2(MyClassB)

This is called contravariance and, in general, we say that if A>B implies G(A)<G(B), where G is a type constructed using the classes, then the relationship is contravariant.

Put in even simpler language, if you construct a new type involving an existing type then it is contravariant if the construction reverses the “use in place of” relationship. Inputs are generally contravariant for the reasons outlined above. 
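
You can see the same thing directly in Kotlin's function types. A minimal sketch, assuming the MyClassA and MyClassB pair defined earlier:

// A lambda that handles any MyClassA...
val handlesAnyA: (MyClassA) -> Unit = { a -> println("handling $a") }

// ...can be used wherever a handler for a MyClassB is required:
val handlesB: (MyClassB) -> Unit = handlesAnyA   // compiles: parameter types are contravariant

// The reverse does not compile: a handler that requires a MyClassB
// cannot stand in for one that must accept any MyClassA:
// val handlesA: (MyClassA) -> Unit = handlesB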

Now consider the same argument, but this time for the return type.

For MyFunction1 the return type is MyClassB. A function, MyFunction2, that returns a MyClassA but is otherwise identical cannot be used in its place, but MyFunction1 can be used in place of MyFunction2. This means that MyFunction2>MyFunction1 because MyFunction1 can be used anywhere MyFunction2 can. 

In this case we have:

MyClassA>MyClassB

which implies:

MyFunction2(){return MyClassA}>MyFunction1(){return MyClassB}

This is an example of covariance and, in simple terms, this means that if you construct a new type involving an existing type, then it is covariant if the construction preserves the “use in place of” relationship.

In general, outputs are covariant. 
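
Again, Kotlin's function types show the covariance of return types directly, assuming the same MyClassA and MyClassB:

// A producer of the more derived type...
val makesB: () -> MyClassB = { MyClassB() }

// ...can be used wherever a producer of the base type is expected:
val makesA: () -> MyClassA = makesB   // compiles: return types are covariant

// The reverse does not compile: a producer that only promises a MyClassA
// cannot stand in for one that must supply a MyClassB:
// val cannotMakeB: () -> MyClassB = makesA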

Now that you have looked at the way that a change to a function affects its type, we can generalize the idea of covariance and contravariance to any situation, not just where functions are involved.

Suppose we have two types A and B, and a modification, or transformation, G that we can make to both of them to give new types G(A) and G(B).

  • If G is a covariant transformation we have A>B implies G(A)>G(B). Outputs are covariant. 

  • If G is a contravariant transformation then we have A>B implies G(A)<G(B). Inputs are contravariant.

  • It is also possible that neither relationship applies. That is, A>B doesn't imply anything about the relationship between G(A) and G(B). In this case G is referred to as invariant – which isn't really a good name.
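
Kotlin's standard collections give a familiar example of the invariant case, again assuming the MyClassA and MyClassB pair sketched earlier:

// MutableList<T> is invariant: MyClassA>MyClassB says nothing about the
// relationship between MutableList<MyClassA> and MutableList<MyClassB>
val mutableBs: MutableList<MyClassB> = mutableListOf(MyClassB())
// val mutableAs: MutableList<MyClassA> = mutableBs   // does not compile

// By contrast, the read-only List<T> is declared covariant, so it does
// follow the ordering of its element type:
val readOnlyAs: List<MyClassA> = listOf(MyClassB())   // compiles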

In the case of our example we had two transformations G1, which converted the type into the input parameter – a contravariant transform, and G2, which converted the type into the return result – a covariant transform. 

It can be very difficult to keep all of this in your head when reasoning about particular data types – arrays for example – but eventually you get used to it.



