Hooking sealed classes in switch – Sealed and Hidden Classes

172. Hooking sealed classes in switch

It is not the first time in this book that we have presented an example of sealed classes and switch expressions. In Chapter 2, Problem 61, we briefly introduced such an example via the sealed Player interface, with the goal of covering completeness (type coverage) in pattern labels for switch. If you found that example confusing at the time, I'm pretty sure that it is clear by now. However, let's keep things fresh and look at another example, starting from this abstract base class:

public abstract class TextConverter {}

And, we have three converters available as follows:

public final class Utf8 extends TextConverter {}
public final class Utf16 extends TextConverter {}
public final class Utf32 extends TextConverter {}

Now, we can write a switch expression to match these TextConverter types as follows:

public static String convert(
  TextConverter converter, String text) {     
  return switch (converter) {
    case Utf8 c8 -> "Converting text to UTF-8: " + c8;
    case Utf16 c16 -> "Converting text to UTF-16: " + c16;
    case Utf32 c32 -> "Converting text to UTF-32: " + c32;
    case TextConverter tc -> "Converting text: " + tc;
    // OR, instead of the unconditional pattern above:
    // default -> "Unrecognized converter type";
  };
}

Check out the highlighted lines of code. After the three cases (case Utf8, case Utf16, and case Utf32), we must have either the case TextConverter or the default case (but not both, since a switch cannot contain an unconditional pattern and a default label at the same time). In other words, after matching Utf8, Utf16, and Utf32, we must have a total type pattern (unconditional pattern) to match any other TextConverter, or a default case, which typically signals that we are facing an unknown converter. If both the total type pattern and the default label are missing, then the code doesn't compile. The switch expression doesn't cover all the possible cases (input values), and therefore it is not exhaustive. This is not allowed, since switch expressions and switch statements that use null and/or pattern labels must be exhaustive. The compiler considers our switch non-exhaustive because we can freely extend the base class (TextConverter) with uncovered cases. An elegant solution is to seal the base class (TextConverter) as follows:

public sealed abstract class TextConverter
  permits Utf8, Utf16, Utf32 {}

And, now the switch can be expressed as follows:

return switch (converter) {
  case Utf8 c8 -> "Converting text to UTF-8: " + c8;
  case Utf16 c16 -> "Converting text to UTF-16: " + c16;
  case Utf32 c32 -> "Converting text to UTF-32: " + c32;
};
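Assembled into a single compilation unit, the sealed hierarchy and the exhaustive switch can be sketched as follows (the ConverterDemo wrapper and its main() method are illustrative additions; in the book's layout, each type lives in its own file):

```java
// All types in one file for brevity.
sealed abstract class TextConverter permits Utf8, Utf16, Utf32 {}
final class Utf8 extends TextConverter {}
final class Utf16 extends TextConverter {}
final class Utf32 extends TextConverter {}

public class ConverterDemo {
  static String convert(TextConverter converter, String text) {
    // No default and no unconditional pattern needed: the compiler
    // knows the sealed hierarchy is fully covered by these cases.
    return switch (converter) {
      case Utf8 c8 -> "Converting text to UTF-8: " + text;
      case Utf16 c16 -> "Converting text to UTF-16: " + text;
      case Utf32 c32 -> "Converting text to UTF-32: " + text;
    };
  }

  public static void main(String[] args) {
    System.out.println(convert(new Utf16(), "hello"));
    // Converting text to UTF-16: hello
  }
}
```

If you comment out any of the three cases, the compilation fails with a non-exhaustive switch error, which is exactly the safety net that sealing buys us.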

This time, the compiler knows all the possible TextConverter types and sees that they are all covered in the switch. Since TextConverter is sealed, there are no surprises; no uncovered cases can occur. Nevertheless, if we later decide to add a new TextConverter (for instance, if we add Utf7 by extending TextConverter and listing this extension in the permits clause), then the compiler will immediately complain that the switch is non-exhaustive, so we must take action and add the proper case for it. At this moment, Utf8, Utf16, and Utf32 are declared final, so they cannot be extended. Let's assume that Utf16 is modified to become non-sealed:

public non-sealed class Utf16 extends TextConverter {}

Now, we can extend Utf16 as follows:

public final class Utf16be extends Utf16 {}
public final class Utf16le extends Utf16 {}

Even though we added two subclasses to the Utf16 class, our switch is still exhaustive, because case Utf16 covers Utf16be and Utf16le as well. Nevertheless, we can explicitly add cases for them, as long as we place these cases before case Utf16, as follows:

return switch (converter) {
  case Utf8 c8 -> "Converting text to UTF-8: " + c8;
  case Utf16be c16 -> "Converting text to UTF-16BE: " + c16;
  case Utf16le c16 -> "Converting text to UTF-16LE: " + c16;
  case Utf16 c16 -> "Converting text to UTF-16: " + c16;
  case Utf32 c32 -> "Converting text to UTF-32: " + c32;
};

We have to add case Utf16be and case Utf16le before case Utf16 to avoid dominance errors (Chapter 2, Problem 60). Here is another example, combining Sealed Classes, Pattern Matching for switch, and Java Records, for computing the sum of the nodes in a binary tree of integers:

sealed interface BinaryTree {
  record Leaf() implements BinaryTree {}
  record Node(int value, BinaryTree left, BinaryTree right)
    implements BinaryTree {}     
}
static int sumNode(BinaryTree t) {
  return switch (t) {
    case Leaf nl -> 0;
    case Node nv -> nv.value() + sumNode(nv.left())
                               + sumNode(nv.right());
  };
}

And, here is an example of calling sumNode():

BinaryTree leaf = new Leaf();
BinaryTree s1 = new Node(5, leaf, leaf);
BinaryTree s2 = new Node(10, leaf, leaf);
BinaryTree s = new Node(4, s1, s2);
int sum = sumNode(s);

In this example, the result is 19.


173. Reinterpreting the Visitor Pattern via sealed classes and type pattern matching for switch

The Visitor Pattern is part of the Gang of Four (GoF) design patterns, and its goal is to define a new operation on certain classes without the need to modify those classes. You can find many excellent resources on this topic on the Internet, so for the classical implementation we provide here only the class diagram of our example, while the code is available on GitHub:

Figure 8.7 – Visitor Pattern class diagram (use case)

In a nutshell, we have a bunch of classes (Capacitor, Transistor, Resistor, and ElectricCircuit) that are used to create electrical circuits. Our operation is shaped in XmlExportVisitor (an implementation of ElectricComponentVisitor) and consists of printing an XML document containing the electrical circuit specifications and parameters. Before continuing, consider getting familiar with the traditional implementation and output of this example, available in the bundled code. Next, let's assume that we want to transform this traditional implementation via Sealed Classes and Type Pattern Matching for switch. The expected class diagram is simpler (it has fewer classes) and looks as follows:

Figure 8.8 – Visitor Pattern reinterpreted via Sealed Classes and switch patterns

Let’s start the transformation with the ElectricComponent interface. We know that this interface is implemented only by Capacitor, Resistor, Transistor, and ElectricCircuit. So, this interface is a good candidate to become sealed as follows:

public sealed interface ElectricComponent
  permits Capacitor, Transistor, Resistor, ElectricCircuit {}

Notice that we deleted the accept() method from this interface. We no longer need this method. Next, Capacitor, Resistor, Transistor, and ElectricCircuit become final classes, and the accept() implementations are deleted as well. Since we don't rely on the traditional Visitor Pattern, we can safely remove its specific artifacts, such as ElectricComponentVisitor and XmlExportVisitor. Pretty clean, right? We are left with a sealed interface and four final classes. Next, we can write a switch that visits each component of a circuit as follows:

private static void export(ElectricComponent circuit) {
  StringBuilder sb = new StringBuilder();
  sb.append("<?xml version=\"1.0\" encoding=\"utf-8\"?>\n");
  export(sb, circuit);
  System.out.println(sb.toString());
}

The export(StringBuilder sb, ElectricComponent... comps) method is the effective visitor:

private static String export(StringBuilder sb,
    ElectricComponent... comps) {
 for (ElectricComponent comp : comps) {
  switch (comp) {
   case Capacitor c ->
    sb.append("""
        <capacitor>
           <maxImpedance>%s</maxImpedance>
           <dielectricResistance>%s</dielectricResistance>
           <coreTemperature>%s</coreTemperature>
        </capacitor>
     """.formatted(c.getMaxImpedance(),
                   c.getDielectricResistance(),
                   c.getCoreTemperature())).toString();
   case Transistor t ->
    sb.append("""
        <transistor>
           <length>%s</length>
           <width>%s</width>
           <threshholdVoltage>%s</threshholdVoltage>
        </transistor>
     """.formatted(t.getLength(), t.getWidth(),
                   t.getThreshholdVoltage())).toString();
   case Resistor r ->
    sb.append("""
        <resistor>
           <resistance>%s</resistance>
           <clazz>%s</clazz>
           <voltage>%s</voltage>
           <current>%s</current>
           <power>%s</power>
        </resistor>
     """.formatted(r.getResistance(), r.getClazz(),
                   r.getVoltage(), r.getCurrent(),
                   r.getPower())).toString();
   case ElectricCircuit ec ->
    sb.append("""
        <electric_circuit_%s>
        %s\
        </electric_circuit_%s>
     """.formatted(ec.getId(),
          export(new StringBuilder(),
           ec.getComps().toArray(ElectricComponent[]::new)),
           ec.getId()).indent(3)).toString();
  }
 }
 return sb.toString();
}

Mission accomplished! You can find the complete example in the bundled code.


174. Getting info about sealed classes (using reflection)

We can inspect sealed classes via two methods added as part of the Java Reflection API. First, we have isSealed(), which is a flag method useful for checking whether a class is sealed. Second, we have getPermittedSubclasses(), which returns an array containing the permitted classes. Based on these two methods, we can write the following helper to return the permitted classes of a sealed class:

public static List<Class> permittedClasses(Class clazz) {
                      
  if (clazz != null && clazz.isSealed()) {
    return Arrays.asList(clazz.getPermittedSubclasses());
  }
  return Collections.emptyList();
}
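To see the helper in action in a self-contained way, here is a minimal sketch (the Shape hierarchy and the generic-friendly Class<?> signature are illustrative assumptions, not part of the book's Fuel model):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Inspector {
  // A tiny sealed hierarchy used only for this demo
  sealed interface Shape permits Circle, Square {}
  record Circle(double radius) implements Shape {}
  record Square(double side) implements Shape {}

  public static List<Class<?>> permittedClasses(Class<?> clazz) {
    if (clazz != null && clazz.isSealed()) {
      return Arrays.asList(clazz.getPermittedSubclasses());
    }
    return Collections.emptyList();
  }

  public static void main(String[] args) {
    // Prints the two permitted records of Shape
    System.out.println(permittedClasses(Shape.class));
    // String is not sealed, so the helper returns an empty list
    System.out.println(permittedClasses(String.class)); // []
  }
}
```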

We can easily test our helper via the Fuel model as follows:

Coke coke = new Coke();
Methane methane = new Methane();
   
// [interface com.refinery.fuel.SolidFuel,
//  interface com.refinery.fuel.LiquidFuel,
//  interface com.refinery.fuel.GaseousFuel]         
System.out.println("Fuel subclasses: "
  + Inspector.permittedClasses(Fuel.class));
// [class com.refinery.fuel.Coke,
//  class com.refinery.fuel.Charcoal]
System.out.println("SolidFuel subclasses: "
  + Inspector.permittedClasses(SolidFuel.class));
// []
System.out.println("Coke subclasses: "
  + Inspector.permittedClasses(coke.getClass()));
// [class com.refinery.fuel.Chloromethane,
//  class com.refinery.fuel.Dichloromethane]
System.out.println("Methane subclasses: "
  + Inspector.permittedClasses(methane.getClass()));

I think you got the idea!

175. Listing top 3 Sealed Classes’ benefits

Maybe you have your own top 3 Sealed Classes benefits that don't match the following list. That's OK; they are benefits after all:

Sealed Classes sustain better design and clearly expose their intentions: Before Sealed Classes, we had to rely only on the final keyword (which is expressive enough) and on package-private classes/constructors. Obviously, package-private code requires some reading between the lines to understand its intention, since it is not easy to spot a closed hierarchy modeled via this hack. On the other hand, Sealed Classes expose their intentions very clearly and expressively.

The compiler can rely on Sealed Classes to perform finer checks on our behalf: Nobody can sneak a class into a hierarchy closed via Sealed Classes. Any such attempt is rejected with a clear and meaningful message. The compiler guards us and acts as the first line of defense against any accidental/non-accidental attempt to use our closed hierarchies in an improper way.

Sealed Classes help the compiler to provide better pattern matching: You experimented with this benefit in Problem 172. The compiler can rely on Sealed Classes to determine whether a switch covers all the possible input values and is therefore exhaustive. And this is just the beginning of what Sealed Classes can do for pattern matching.


176. Briefly introducing Hidden Classes

Hidden Classes have been introduced in JDK 15 under JEP 371. Their main goal is to be used by frameworks as dynamically generated classes. They are runtime-generated classes with a short lifespan that are used by frameworks via reflection.

Hidden Classes cannot be used directly by the bytecode or other classes. They are not created via a class loader. Basically, a Hidden Class has the class loader of the lookup class.

Among other characteristics of Hidden Classes, we have that:

They are not discoverable by the JVM's internal linkage of bytecode or by explicit usage of class loaders (they are invisible to methods such as Class.forName(), Lookup.findClass(), or ClassLoader.findLoadedClass()), and they don't appear in stack traces.

They can extend an access control nest (ACN) with classes that cannot be discovered.

Frameworks can define as many Hidden Classes as needed, since they benefit from aggressive unloading. This way, a large number of Hidden Classes shouldn't have a negative impact on performance. They sustain efficiency and flexibility.

They cannot be used as field/return/parameter type. They cannot be superclasses.

They can access their code directly without the presence of a class object.

They can have final fields, and those fields cannot be modified regardless of their accessible flags.

They led to the deprecation of sun.misc.Unsafe::defineAnonymousClass, which is a non-standard API. Starting with JDK 15, lambda expressions use Hidden Classes instead of anonymous classes.

Next, let’s see how we can create and use a Hidden Class.

177. Creating a hidden class

Let’s assume that our hidden class is named InternalMath and is as simple as follows:

import java.util.stream.IntStream;

public class InternalMath {
  
  public long sum(int[] nr) {
    return IntStream.of(nr).sum();
  }
}

As we mentioned in the previous problem, Hidden Classes have the same class loader as the lookup class which can be obtained via MethodHandles.lookup() as follows:

MethodHandles.Lookup lookup = MethodHandles.lookup();

Next, we must know that Lookup contains a method named defineHiddenClass(byte[] bytes, boolean initialize, ClassOption... options). The most important argument is the array of bytes containing the class data. The initialize argument is a flag specifying whether the Hidden Class should be initialized, while the options argument can be NESTMATE (the created hidden class becomes a nestmate of the lookup class and has access to all the private members in the same nest) or STRONG (the created hidden class can be unloaded only if its defining loader is not reachable). So, our goal is to obtain the array of bytes that contains the class data. For this, we rely on getResourceAsStream() and, available since JDK 9, readAllBytes(), as follows:

Class<?> clazz = InternalMath.class;      
String clazzPath = clazz.getName()
  .replace('.', '/') + ".class";
InputStream stream = clazz.getClassLoader()
  .getResourceAsStream(clazzPath);      
byte[] clazzBytes = stream.readAllBytes();

Having clazzBytes in our hands, we can create the hidden class as follows:

Class<?> hiddenClass = lookup.defineHiddenClass(clazzBytes,
  true, ClassOption.NESTMATE).lookupClass();

Done! Next, we can use the hidden class from inside our framework as follows:

Object obj = hiddenClass.getConstructor().newInstance();
Method method = obj.getClass()
  .getDeclaredMethod("sum", int[].class);
System.out.println(method.invoke(
  obj, new int[] {4, 1, 6, 7})); // 18

As you can see, we use the hidden class via reflection. The interesting part here is that we cannot cast the hidden class to InternalMath, which is why we use Object obj = …. So, this will not work:

InternalMath obj = (InternalMath) hiddenClass
  .getConstructor().newInstance();

However, we can define an interface implemented by the hidden class:

public interface Math {}
public class InternalMath implements Math {…}

And, now we can cast to Math:

Math obj = (Math) hiddenClass.getConstructor().newInstance();

Starting with JDK 16, the Lookup class was enriched with another method for defining a hidden class, named defineHiddenClassWithClassData(byte[] bytes, Object classData, boolean initialize, ClassOption... options). The extra class data passed to this method can later be retrieved via MethodHandles.classData(Lookup caller, String name, Class<T> type) or MethodHandles.classDataAt(Lookup caller, String name, Class<T> type, int index). Take your time to explore this further.

Summary

This chapter covered 13 problems. Most of them were focused on the sealed classes feature. The last two problems provided a brief coverage of hidden classes.

Working with mapMulti() 2 – Functional style programming – extending API

Each Author has a list of books. So, a List<Author> (candidate to become Stream<Author>) will nest a List<Book> (candidate to become a nested Stream<Book>) for each Author. Moreover, we have the following simple model for mapping an author and a single book:

public class Bookshelf {
  private final String author;
  private final String book;
  …
}

In functional programming, mapping this one-to-many model to the flat Bookshelf model is a classical scenario for using flatMap() as follows:

List<Bookshelf> bookshelfClassic = authors.stream()
  .flatMap(
    author -> author.getBooks()
                    .stream()
                    .map(book -> new Bookshelf(
                       author.getName(), book.getTitle()))
  ).collect(Collectors.toList());

The problem with flatMap() is that we need to create a new intermediate stream for each author (for a large number of authors, this can become a performance penalty), and only afterward can we apply the map() operation. With mapMulti(), we don't need these intermediate streams, and the mapping is straightforward:

List<Bookshelf> bookshelfMM = authors.stream()
  .<Bookshelf>mapMulti((author, consumer) -> {
     for (Book book : author.getBooks()) {
       consumer.accept(new Bookshelf(
         author.getName(), book.getTitle()));
     }
  })
  .collect(Collectors.toList());

This is a one-to-many mapping. For each author, the consumer buffers a number of Bookshelf instances equal to the number of the author's books. These instances are flattened into the downstream and are finally collected in a List<Bookshelf> via the toList() collector. And this brings us to the following use case of mapMulti():

The mapMulti() intermediate operation is useful when we have to replace just a few elements of the stream. This statement is formulated in the official documentation as follows: “When replacing each stream element with a small (possibly zero) number of elements”.

Check out this example based on flatMap():

List<Bookshelf> bookshelfGt2005Classic = authors.stream()
  .flatMap(
    author -> author.getBooks()
      .stream()
      .filter(book -> book.getPublished().getYear() > 2005)
      .map(book -> new Bookshelf(
         author.getName(), book.getTitle()))
  ).collect(Collectors.toList());

This example fits perfectly with mapMulti(). An author has a relatively small number of books, and we apply a filter on those books. So, basically, we replace each stream element with a small (possibly zero) number of elements:

List<Bookshelf> bookshelfGt2005MM = authors.stream()
  .<Bookshelf>mapMulti((author, consumer) -> {
    for (Book book : author.getBooks()) {
      if (book.getPublished().getYear() > 2005) {
        consumer.accept(new Bookshelf(
          author.getName(), book.getTitle()));
      }
    }
  })
  .collect(Collectors.toList());

This is better, since we reduced the number of intermediate operations (no more filter() calls) and we avoided the intermediate streams. I'd say that this is a little bit more readable as well. Another use case of mapMulti() sounds like this:

The mapMulti() operation is also useful when the imperative approach is preferable against the stream approach. This statement is formulated in the official documentation as follows: “When it is easier to use an imperative approach for generating result elements than it is to return them in the form of a Stream”.

Imagine that we have added in the Author class the following method:

public void bookshelfGt2005(Consumer<Bookshelf> consumer) {
  for (Book book : this.getBooks()) {
    if (book.getPublished().getYear() > 2005) {
      consumer.accept(new Bookshelf(
        this.getName(), book.getTitle()));
    }
  }
}

Now, we can obtain the List<Bookshelf> by simply using mapMulti() as follows:

List<Bookshelf> bookshelfGt2005MM = authors.stream()
  .<Bookshelf>mapMulti(Author::bookshelfGt2005)
  .collect(Collectors.toList());

How cool is this?! In the next problem, we will use mapMulti() in another scenario.


178. Working with mapMulti()

Starting with JDK 16, the Stream API was enriched with a new intermediate operation, named mapMulti(). This operation is represented by the following default method in the Stream interface:

default <R> Stream<R> mapMulti(
  BiConsumer<? super T, ? super Consumer<R>> mapper)

Let’s follow the learning-by-example approach and let’s consider the next classical example that uses a combination of filter() and map() to filter even integers and double their value:

List<Integer> integers = List.of(3, 2, 5, 6, 7, 8);
List<Integer> evenDoubledClassic = integers.stream()
  .filter(i -> i % 2 == 0)
  .map(i -> i * 2)
  .collect(toList());

The same result can be obtained via mapMulti() as follows:

List<Integer> evenDoubledMM = integers.stream()
  .<Integer>mapMulti((i, consumer) -> {
     if (i % 2 == 0) {
       consumer.accept(i * 2);
     }
  })
  .collect(toList());

So, instead of using two intermediate operations, we used only one, mapMulti(). The filter() role was replaced by an if statement, and the map() role is accomplished in the accept() call. This time, we filtered the evens and doubled their values via mapper, which is a BiConsumer<? super T, ? super Consumer<R>>. This bi-consumer is applied to each integer (each stream element), and only the even integers are passed to the consumer. This consumer acts as a buffer that simply passes the received elements downstream (in the stream pipeline). The mapper.accept(R r) can be called any number of times, which means that, for a given stream element, we can produce as many output elements as we need. In the previous example, we have a one-to-zero mapping (when i % 2 == 0 is evaluated as false) and a one-to-one mapping (when i % 2 == 0 is evaluated as true).

More precisely, mapMulti() gets an input stream of elements and outputs another stream containing 0, less, the same, or a larger number of elements that can be unaltered or replaced by other elements. This means that each element from the input stream can pass through a one-to-zero, one-to-one, or one-to-many mapping.
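To make the one-to-many (and one-to-zero) case concrete, here is a minimal, self-contained sketch (the word-splitting scenario is illustrative, not one of the book's examples):

```java
import java.util.List;
import java.util.stream.Stream;

public class OneToManyDemo {
  public static void main(String[] args) {
    // One-to-many: each word is replaced by one element per character;
    // the empty string produces zero elements (one-to-zero).
    List<String> letters = Stream.of("hi", "!", "")
      .<String>mapMulti((word, consumer) -> {
        for (char ch : word.toCharArray()) {
          consumer.accept(String.valueOf(ch)); // 0..n outputs per input
        }
      })
      .toList();
    System.out.println(letters); // [h, i, !]
  }
}
```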

Have you noticed the <Integer>mapMulti(…) type-witness applied to the returned value? Without this type-witness, the code will not compile, because the compiler cannot determine the proper type of R. This is the shortcoming of using mapMulti(), so we have to pay this price. For primitive types (double, long, and int), we have mapMultiToDouble(), mapMultiToLong(), and mapMultiToInt(), which return DoubleStream, LongStream, and IntStream. For instance, if we plan to sum the even integers, then using mapMultiToInt() is a better choice than mapMulti(), since we can skip the type-witness and work only with the primitive int:

int evenDoubledAndSumMM = integers.stream()
  .mapMultiToInt((i, consumer) -> {
     if (i % 2 == 0) {
       consumer.accept(i * 2);
     }
  })
  .sum();

On the other hand, whenever you need a Stream<T> instead of Double/Long/IntStream, you still need to rely on mapToObj() or boxed():

List<Integer> evenDoubledMM = integers.stream()
  .mapMultiToInt((i, consumer) -> {
    if (i % 2 == 0) {
      consumer.accept(i * 2);
    }
  })
  .mapToObj(i -> i) // or, .boxed()
  .collect(toList());

Once you get familiar with mapMulti(), you start to realize that it is pretty similar to the well-known flatMap(), which is useful for flattening a nested Stream<Stream<R>> model. Let's consider the following one-to-many relationship:

public class Author {
  private final String name;
  private final List<Book> books;
  …
}
public class Book {
   
  private final String title;
  private final LocalDate published;
  …
}


180. Exemplifying method reference vs. lambda

Have you ever written a lambda expression and had your IDE advise you to replace it with a method reference? Of course, you have! And I'm sure that you preferred to follow the replacement, because names matter, and method references are often more readable than lambdas. While this is a subjective matter, I'm pretty sure you'll agree that extracting long lambdas into methods and using/re-using them via method references is a generally accepted good practice. But, beyond some esoteric JVM internal representations, do they behave the same? Is there any difference between a lambda and a method reference that may affect how the code behaves? Well, let's assume that we have the following simple class:

public class Printer {
  Printer() {
    System.out.println("Reset printer …");
  }

  public static void printNoReset() {
    System.out.println(
      "Printing (no reset) …" + Printer.class.hashCode());
  }

  public void printReset() {
    System.out.println("Printing (with reset) …"
      + Printer.class.hashCode());
  }
}

If we assume that p1 is a method reference and p2 is the corresponding lambda then we can perform the following calls:

System.out.print("p1:"); p1.run();
System.out.print("p1:"); p1.run();
System.out.print("p2:"); p2.run();
System.out.print("p2:"); p2.run();
System.out.print("p1:"); p1.run();
System.out.print("p2:"); p2.run();

Next, let’s see two scenarios of working with p1 and p2.

Scenario 1: Call printReset()

In the first scenario, we call printReset() via p1 and p2 as follows:

Runnable p1 = new Printer()::printReset;
Runnable p2 = () -> new Printer().printReset();

If we run the code right now then we get this output (the message generated by the Printer constructor):

Reset printer …

This output is caused by the method reference, p1. The Printer constructor is invoked right away even if we didn’t call the run() method. Because p2 (the lambda) is lazy, the Printer constructor is not called until we call the run() method. Going further, we fire the chain of run() calls for p1 and p2. The output will be:

p1:Printing (with reset) …1159190947
p1:Printing (with reset) …1159190947
p2:Reset printer …
Printing (with reset) …1159190947
p2:Reset printer …
Printing (with reset) …1159190947
p1:Printing (with reset) …1159190947
p2:Reset printer …
Printing (with reset) …1159190947

If we analyze this output, we notice that the Printer constructor is called each time the lambda (p2.run()) is executed. On the other hand, for the method reference (p1.run()), the Printer constructor is not called again; it was called a single time, at the p1 declaration. So, p1 is printing without resetting the printer. This can be a major aspect!

Scenario 2: Call static printNoReset()

Next, let’s call the static method printNoReset():

Runnable p1 = Printer::printNoReset;
Runnable p2 = () -> Printer.printNoReset();

If we run the code right away then nothing will happen (no output). Next, we fire up the run() calls, and we get this output:

p1:Printing (no reset) …149928006
p1:Printing (no reset) …149928006
p2:Printing (no reset) …149928006
p2:Printing (no reset) …149928006
p1:Printing (no reset) …149928006
p2:Printing (no reset) …149928006

The printNoReset() is a static method, so the Printer constructor is not invoked. We can interchangeably use p1 or p2 without any difference in behavior. So, in this case, it is just a matter of preference.

Conclusion

When calling non-static methods, there is a main difference between a method reference and a lambda. A method reference calls the constructor immediately, and only once; at method invocation time (run()), the constructor is not called again. On the other hand, lambdas are lazy: they call the constructor only at method invocation, and at each such invocation (run()).
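This conclusion can be verified with a small, self-contained sketch that counts constructions instead of printing (the CaptureDemo class and its AtomicInteger counter are illustrative, not from the book's code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CaptureDemo {
  static final AtomicInteger constructed = new AtomicInteger();

  static class Printer {
    Printer() { constructed.incrementAndGet(); }
    void print() { }
  }

  public static void main(String[] args) {
    Runnable p1 = new Printer()::print;        // constructor runs NOW, once
    Runnable p2 = () -> new Printer().print(); // constructor deferred

    System.out.println(constructed.get()); // 1 (only p1's captured instance)
    p1.run();
    p1.run();
    System.out.println(constructed.get()); // still 1
    p2.run();
    p2.run();
    System.out.println(constructed.get()); // 3 (one new Printer per p2.run())
  }
}
```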


181. Hooking lambda laziness via Supplier/Consumer

The java.util.function.Supplier is a functional interface capable of supplying results via its get() method. The java.util.function.Consumer is another functional interface, capable of consuming the argument given via its accept() method. It returns no result (void). Both of these functional interfaces are lazy, so it is not quite simple to analyze and understand code that involves them, especially when a snippet of code involves both of them. Let's give it a try! Consider the following simple class:

static class Counter {
  static int c;
  public static int count() {
    System.out.println("Incrementing c from "
      + c + " to " + (c + 1));
    return c++;                                   
  }
}

And, let’s write the following Supplier and Consumer:

Supplier<Integer> supplier = () -> Counter.count();
Consumer<Integer> consumer = c -> {
  c = c + Counter.count();
  System.out.println("Consumer: " + c);
};

So, at this point, what is the value of Counter.c?

System.out.println("Counter: " + Counter.c); // 0

The correct answer is: Counter.c is 0. The supplier and the consumer are lazy, so neither get() nor accept() was called at their declarations. Counter.count() has not been invoked so far, so Counter.c was not incremented. Here is a tricky one … how about now?

System.out.println("Supplier: " + supplier.get()); // 0

We know that by calling supplier.get() we trigger the Counter.count() execution, so Counter.c should be incremented and become 1. However, supplier.get() will return 0. The explanation resides in the count() method, at the line return c++;. When we write c++, we use the post-increment operation, so we use the current value of c in our statement (in this case, return) and only afterward increment it by 1. This means that supplier.get() gets back the value of c as 0, while the incrementation takes place after this return, and Counter.c is now 1:

System.out.println("Counter: " + Counter.c); // 1

If we switch from post-increment (c++) to pre-increment (++c), then supplier.get() will get back the value of 1, which will be in sync with Counter.c. This happens because the incrementation takes place before the value is used in our statement (here, return). OK, so far we know that Counter.c is equal to 1. Next, let's call the consumer and pass in the Counter.c value:

consumer.accept(Counter.c);       

Via this call, we push the Counter.c (which is 1) in the following computation and display:

c -> {
  c = c + Counter.count();
  System.out.println("Consumer: " + c);
} // Consumer: 2

So, c = c + Counter.count() can be seen as Counter.c = Counter.c + Counter.count(), which evaluates as c = 1 + 1 (the post-incrementing count() returns the old value, 1). The output will be: Consumer: 2. This time, Counter.c is also 2 (remember the post-increment effect):

System.out.println("Counter: " + Counter.c); // 2

Next, let’s invoke the supplier:

System.out.println("Supplier: " + supplier.get()); // 2

We know that get() will receive the current value of c which is 2. Afterward, Counter.c becomes 3:

System.out.println("Counter: " + Counter.c); // 3

We can continue like this forever, but I think you got the idea of how the Supplier and the Consumer functional interfaces work.
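As a recap, the post- versus pre-increment subtlety at the heart of this walkthrough can be isolated in a minimal, self-contained sketch (the IncrementDemo class is illustrative):

```java
import java.util.function.Supplier;

public class IncrementDemo {
  static int c;

  public static void main(String[] args) {
    Supplier<Integer> post = () -> c++; // return current c, THEN increment
    System.out.println(post.get());     // 0
    System.out.println(c);              // 1

    c = 0;
    Supplier<Integer> pre = () -> ++c;  // increment FIRST, then return
    System.out.println(pre.get());      // 1
    System.out.println(c);              // 1 (in sync with the returned value)
  }
}
```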

Refactoring code to add lambda laziness 2 – Functional style programming – extending API

Fixing in functional fashion

How about providing this fix in a functional programming fashion? Practically, all we want is to lazily download the application's dependencies. Since laziness is the specialty of functional programming, and we've just gotten familiar with the Supplier (see the previous problem), we can start as follows:

public class ApplicationDependency {

  private final Supplier<String> dependencies
    = this::downloadDependencies;
  …
  public String getDependencies() {
    return dependencies.get();
  }
  …
  private String downloadDependencies() {

    return "list of dependencies downloaded from repository "
      + Math.random();
  }
}

First, we defined a Supplier that calls the downloadDependencies() method. We know that the Supplier is lazy, so nothing happens until its get() method is explicitly called. Second, we modified getDependencies() to return dependencies.get(). So, we delay the download of the application's dependencies until they are explicitly required. Third, we modified the return type of the downloadDependencies() method from void to String. This is needed for Supplier.get().

This is a nice fix, but it has a serious shortcoming: we lost the caching! Now, the dependencies will be downloaded at every getDependencies() call. We can avoid this issue via memoization (https://en.wikipedia.org/wiki/Memoization). We have covered this concept in Chapter 8 of The Complete Coding Interview Guide in Java. In a nutshell, memoization is a technique used to avoid duplicate work by caching results that can be reused later. Memoization is commonly applied in Dynamic Programming, but there are no restrictions or limitations; for instance, we can apply it in functional programming. In our particular case, we start by defining a functional interface that extends the Supplier interface:

@FunctionalInterface
public interface FSupplier<R> extends Supplier<R> {}

Next, we provide an implementation of FSupplier that basically caches unseen results and serves the already-seen ones from the cache:

public class Memoize {

  private static final Object UNDEFINED = new Object();

  @SuppressWarnings("unchecked")
  public static <T> FSupplier<T> supplier(
      final Supplier<T> supplier) {

    AtomicReference<Object> cache
      = new AtomicReference<>(UNDEFINED);

    return () -> {

      Object value = cache.get();

      if (value == UNDEFINED) {

        synchronized (cache) {

          if (cache.get() == UNDEFINED) {
            // call the given supplier only once and cache its result
            value = supplier.get();
            System.out.println("Caching: " + value);
            cache.set(value);
          } else {
            // another thread cached the value in the meantime
            value = cache.get();
          }
        }
      }

      return (T) value;
    };
  }
}

Finally, we replace our initial Supplier with FSupplier as follows:

private final Supplier<String> dependencies
  = Memoize.supplier(this::downloadDependencies);

Done! Our functional approach takes advantage of Supplier’s laziness and can cache the results.
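To convince ourselves that the memoized supplier really downloads only once, here is a quick self-contained check. MemoizeDemo and its memoize() helper are hypothetical names; the helper mirrors the Memoize.supplier() logic from the text, simply using null instead of the UNDEFINED sentinel (which is fine as long as the supplier never produces null):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class MemoizeDemo {

    // Compact variant of Memoize.supplier(): null plays the role of UNDEFINED
    static <T> Supplier<T> memoize(Supplier<T> supplier) {
        AtomicReference<T> cache = new AtomicReference<>();
        return () -> {
            T value = cache.get();
            if (value == null) {
                synchronized (cache) {
                    if (cache.get() == null) {
                        cache.set(supplier.get()); // costly call happens once
                    }
                    value = cache.get();
                }
            }
            return value;
        };
    }

    public static void main(String[] args) {
        AtomicInteger downloads = new AtomicInteger();
        Supplier<String> dependencies = memoize(() -> {
            downloads.incrementAndGet();           // simulate a costly download
            return "list of dependencies " + Math.random();
        });

        String first = dependencies.get();
        String second = dependencies.get();

        System.out.println(first.equals(second));  // true: same cached value
        System.out.println(downloads.get());       // 1: downloaded only once
    }
}
```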


182. Refactoring code to add lambda laziness

In this problem, let's have a refactoring session meant to transform dysfunctional code into functional code. We start from the following code, a simple class mapping information about an application's dependencies:

public class ApplicationDependency {
  
  private final long id;
  private final String name;
  private String dependencies;
  public ApplicationDependency(long id, String name) {
    this.id = id;
    this.name = name;
  }
  public long getId() {
    return id;
  }
  public String getName() {
    return name;
  } 
  
  public String getDependencies() {
    return dependencies;
  }
  
  private void downloadDependencies() {

    dependencies = "list of dependencies downloaded from repository "
      + Math.random();
  }
}

Why did we highlight the getDependencies() method? Because this is the dysfunctional point in the application. More precisely, the following class needs the dependencies of an application in order to process them accordingly:

public class DependencyManager {
  
  private Map<Long,String> apps = new HashMap<>();
  
  public void processDependencies(ApplicationDependency appd){
      
    System.out.println();
    System.out.println("Processing app: " + appd.getName());
    System.out.println("Dependencies: "
      + appd.getDependencies());

    apps.put(appd.getId(), appd.getDependencies());
  }  
}

This class relies on the ApplicationDependency#getDependencies() method, which just returns null (the default value of the dependencies field). The expected application's dependencies were not downloaded, since the downloadDependencies() method was never called. Most probably, a code reviewer will signal this issue and raise a ticket to fix it.

Fixing in imperative fashion

A possible fix will be as follows (in ApplicationDependency):

public class ApplicationDependency {
  
  private String dependencies = downloadDependencies();
  …
  public String getDependencies() {
           
    return dependencies;
  }
  …
  private String downloadDependencies() {
         
    return "list of dependencies downloaded from repository "
      + Math.random();
  }
}

Calling downloadDependencies() at the initialization of dependencies will definitely fix the problem of loading the dependencies. When the DependencyManager calls getDependencies(), it will have access to the downloaded dependencies. But is this a good approach? Downloading the dependencies is a costly operation, and we do it every time an ApplicationDependency instance is created. If the getDependencies() method is never called, then this costly operation doesn't pay off. So, a better approach is to postpone the download of the application's dependencies until getDependencies() is actually called:

public class ApplicationDependency {
  private String dependencies;
  …
  public String getDependencies() {
             
    downloadDependencies();      
     
    return dependencies;
  }
  …
  private void downloadDependencies() {
         
    dependencies = "list of dependencies downloaded from repository "
      + Math.random();
  }  
}

This is better, but it is not the best! This time, the application's dependencies are downloaded every time the getDependencies() method is called. Fortunately, there is a quick fix for this: we just need to add a null check before performing the download:

public String getDependencies() {
      
  if (dependencies == null) {
    downloadDependencies();
  }
       
  return dependencies;
}

Done! Now, the application’s dependencies are downloaded only at the first call of the getDependencies() method. This imperative solution works like a charm and will pass the code review.
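To double-check that the null-check guard really limits the costly work to the first call, here is a minimal self-contained sketch. LazyInitDemo, the nested App class, and the downloadCount field are hypothetical names introduced just for this demo; App mirrors the ApplicationDependency getter above:

```java
public class LazyInitDemo {

    static class App {
        static int downloadCount;   // tracks costly calls (demo only)
        private String dependencies;

        public String getDependencies() {
            if (dependencies == null) {   // download only on the first call
                downloadDependencies();
            }
            return dependencies;
        }

        private void downloadDependencies() {
            downloadCount++;
            dependencies = "list of dependencies downloaded from repository";
        }
    }

    public static void main(String[] args) {
        App app = new App();
        app.getDependencies();
        app.getDependencies();
        System.out.println(App.downloadCount); // 1: only the first call downloads
    }
}
```

Note that this lazy getter is not thread-safe; the functional fix shown earlier guards the cache with synchronized for exactly that reason.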