

Alpakka (Akka Streams) vs Apache Camel: who wins?

Gabriel 29 May, 2018 (6 min read)

Most software engineers have to work with enterprise integrations, and, since we are all lazy, we love tools that provide things out of the box.

Before I start, I have to say: I have already worked with both Camel and Alpakka, so I'll try to make the fairest comparison I can.

Apache Camel is a "lightweight ESB" that has been around for some time now. It is widely adopted and battle-tested in production at many companies, such as Cisco, Netflix and JPMorgan. On the other hand, the Alpakka project is relatively new, is built on Akka Streams, and since last week has had its own team focused on its community.

Both are backed by great organizations: the Apache Software Foundation and Lightbend.

I'll break this post into several topics, covering the features, strengths and weaknesses of each one. Oh, I almost forgot: I'll show a few examples, and all of them are going to be written in Scala, even the Camel ones.

Community

Many people don't think this is a big deal, but before starting a big project it is really important to be sure how easily you'll get help from the community if a problem occurs. With this in mind, of course we have to look at how the libraries are being developed.

Camel

Taking a quick look at Camel, we see that it has been a highly active project since its inception. And it is 11 years old. (!!)

[Screenshot: Camel's GitHub contribution activity]

Of course, this is not all we have to do before choosing a library, but it is really nice to know that your project will have long-term support.

Alpakka

As I said before, Alpakka is relatively new. Its GitHub repository was created in 2016 and has far fewer contributions than Camel's.

[Screenshot: Alpakka's GitHub contribution activity]

I don't really think it's a fair comparison, since Alpakka uses Akka Streams and, consequently, Akka Actors, so this picture does not show how big the project really is. But it gives us an idea that it's right on track and contributions are always being made.

Components and connectors

Camel has more than 200 components for basically anything you can imagine: HTTP, AMQP, SQS, S3, WebSockets, MongoDB, JDBC, JMS, Kafka, ZeroMQ... Well, you can check it out over here, but basically there's a 99% chance that you won't need to write your own component.

Alpakka has far fewer, a little over 30 as you can see over here, but the ones it has will probably satisfy your needs, including: AMQP, SQS, S3, WebSockets, Slick, JMS, Kafka and MongoDB. HTTP integration is provided by Akka HTTP (which uses Akka Streams), so that's the way to go if you need HTTP integration with Akka Streams. As far as I know, other Alpakka connectors are on the way.

Type safety

For me, this is one of the most important aspects to take into account. The compiler helps you while you're creating your Akka Streams application; Camel, on the other hand, still works with strings, which is clearly a magnet for bugs.

Take this as an example:

from("timer:clock?period=1000&delay=1000'")
  .to("http4:localhost:8080/health-check")

Did you notice the error? I hope so, because the compiler did not. The code above compiles just fine, but that wouldn't happen if we were using Akka Streams, since the initialDelay parameter expects a FiniteDuration:

Source
    .tick(
      initialDelay = 1 second,
      interval = 1 second,
      tick = ()
    )
    .runWith(healthCheckHttpSink)
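To see why the typed parameter matters, here's a small sketch of the same idea in plain Scala (no Akka; the schedule function is made up for the example): by taking a FiniteDuration instead of a string, malformed values are rejected at compile time instead of being discovered at runtime.

```scala
import scala.concurrent.duration._

object TypedDelay extends App {
  // A function that, like Source.tick, demands a FiniteDuration
  def schedule(initialDelay: FiniteDuration): String =
    s"scheduled after ${initialDelay.toMillis} ms"

  println(schedule(1.second))   // prints: scheduled after 1000 ms
  // schedule("1000'")          // does not compile: type mismatch
}
```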

Learning curve

That's a tricky one, but I find Camel harder to understand because it does too much magic (like lots of reflection) behind the scenes. Of course, it is highly extensible because of this magic, but it's also what makes it hard for people to start working with.

When I started using Camel at work, most (Java) engineers struggled a lot to get simple things done, but when I tried Akka Streams, it felt more natural for our (Scala) engineers to work with. Maybe that has more to do with the fact that most Java developers are not used to writing applications with fluent APIs like Camel's, while the Scala developers just had to learn a new Scala library that worked pretty much like the others, but I guess it was fair to mention here.

Ease of use

Besides the learning curve, you also need to know how easy each one is to use. For simple tasks, I find Camel much simpler, but I really don't like it when it's used for big flows with lots of message parsing. Also, the Exchange object is very confusing and abuses mutable state. If you want to copy all the properties and headers from the input to the output, you need to do it explicitly:

def process(exchange: Exchange) = {
  val body = exchange.getIn().getBody(classOf[String])
  exchange.getOut().setBody(s"Hello $body")
  exchange.getOut().setHeaders(exchange.getIn().getHeaders())
  exchange.getOut().setAttachments(exchange.getIn().getAttachments())
}

OR, if you don't want to copy all the properties, you can just set the input and Camel will presume that's the output of your processor (wtf, really????):

def process(exchange: Exchange) = {
  val body = exchange.getIn().getBody(classOf[String])
  exchange.getIn().setBody(s"Hello $body")
}

There are lots of little things I don't really like in Camel, but I guess the Exchange object and the Type Converters annoy me the most.

Don't get me wrong: Akka Streams and Alpakka are far from being easy to use, but semantically they are easier to understand. There's no mutable state and there aren't many patterns to follow; it just ensures your flow works as a reactive stream. It doesn't dictate how you handle and transform your data, it just helps you do it.
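To illustrate what "no mutable state" buys you in practice, here's a minimal sketch in plain Scala (no Akka; the Message type and both stages are made up for the example) of the immutable style Akka Streams encourages: each stage is a pure function, and no element is ever mutated in place, unlike Camel's Exchange.

```scala
object ImmutableFlowSketch extends App {
  // Hypothetical message type standing in for a stream element
  final case class Message(body: String, headers: Map[String, String])

  // Each "stage" returns a new Message instead of mutating the old one
  val addGreeting: Message => Message =
    m => m.copy(body = s"Hello ${m.body}")

  val tagProcessed: Message => Message =
    m => m.copy(headers = m.headers + ("processed" -> "true"))

  // Stages compose like a pipeline, much as Flows do in Akka Streams
  val pipeline: Message => Message = addGreeting andThen tagProcessed

  val in  = Message("Gabriel", Map.empty)
  val out = pipeline(in)

  println(out.body) // Hello Gabriel
  println(in.body)  // Gabriel (the original message is untouched)
}
```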

Testing

Both libraries provide test modules to help you write tests and assure that your flows work as you expect. I don't think I have much to say about this, except that I have nothing to complain about in either case.

Reactive Streams

Alpakka is built on Akka Streams, so it natively implements all the Reactive Streams features. It has some pretty nice ones, which I'll talk about right after this topic.

Camel, on the other hand, is not natively reactive, nor does it give us good support for asynchronous or non-blocking code, so you'll have to use the reactive-streams component to handle the parts that need to be reactive. This kind of sucks, really: if your entire codebase needs to be asynchronous and non-blocking, you'll probably end up using Camel only as a chain of responsibility instead of an integration library.

Backpressure

Alpakka has backpressure out of the box, implemented by Akka Streams, so it's pretty easy to use with Alpakka connectors. With Camel you will have to work with the reactive-streams component and backpressure whatever needs to be backpressured across your flows (outside of Camel, of course).
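One way to picture what "backpressuring outside of Camel" means is a bounded queue: a fast producer blocks whenever the slow consumer falls behind, which is essentially what backpressure does. A rough sketch in plain Scala (no Camel involved; the queue capacity and timings are arbitrary):

```scala
import java.util.concurrent.ArrayBlockingQueue

object BoundedQueueSketch extends App {
  // Capacity 2 plays the role of the downstream "demand"
  val queue = new ArrayBlockingQueue[Int](2)

  val consumer = new Thread(() => {
    for (_ <- 1 to 5) {
      val n = queue.take()
      Thread.sleep(20) // simulate a slow consumer
      println(s"consumed $n")
    }
  })
  consumer.start()

  // put() blocks once the queue is full, slowing the producer down
  for (n <- 1 to 5) queue.put(n)
  consumer.join()
}
```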

Throttling

Both of them have throttling features, so, it's pretty easy to use it on each one of them:

Camel:

import org.apache.camel.CamelContext
import org.apache.camel.builder.RouteBuilder
import org.apache.camel.impl.DefaultCamelContext
import org.apache.camel.main.Main

object CamelSample extends App {
  val context: CamelContext = new DefaultCamelContext()

  val greetings = new RouteBuilder() {
    override def configure(): Unit = {
      from("direct:greetings")
        .throttle(1)
        .timePeriodMillis(5000)
        .process(e => e.getIn.setBody(s"hello, ${e.getIn.getBody}"))
    }
  }

  val tick = new RouteBuilder() {
    override def configure(): Unit = {
      from("timer:clock?period=1000&delay=1000")
        .setBody(simple("Gabriel"))
        .to("direct:greetings")
        .to("stream:out")
    }
  }

  context.addRoutes(greetings)
  context.addRoutes(tick)

  val main = new Main
  main.getCamelContexts.add(context)
  main.run()
}

Akka Streams:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Source}

import scala.concurrent.duration._

object AkkaStreamsSample extends App {
  implicit val sys = ActorSystem()
  implicit val mat = ActorMaterializer()

  val greetingFlow =
    Flow[String]
      .throttle(1, 5 seconds)
      .map { name => s"hello, $name" }

  Source
    .tick(
      initialDelay = 1 second,
      interval = 1 second,
      tick = "Gabriel"
    )
    .via(greetingFlow)
    .runForeach(println)
}

Asynchronous processing

Camel provides a basic way to work with Futures, specifically CompletableFuture from Java 8. Basically, all you have to do is create a processor that returns a CompletableFuture as the result:

object AsyncBeanProcessor {
  @Handler
  def process(body: String): CompletableFuture[String] = {
    CompletableFuture
      .supplyAsync(() => s"hi, $body")
  }
}

And, to use it:

from("timer:clock?period=1000&delay=1000")
  .setBody(simple("Gabriel"))
  .bean(AsyncBeanProcessor)
  .to("stream:out")

That's basically it. There's no backpressure. If you're dealing with non-blocking HTTP calls, for instance, you will need to control the maximum number of parallel requests elsewhere.
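One common workaround for that is to cap the number of in-flight calls yourself, for example by running the async work on a small fixed-size thread pool. A sketch in plain Scala (no Camel; the Future body stands in for a real non-blocking HTTP call):

```scala
import java.util.concurrent.Executors

import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContext, Future}

object BoundedParallelism extends App {
  // A 2-thread pool means at most 2 "requests" execute concurrently,
  // no matter how many futures are created
  private val pool = Executors.newFixedThreadPool(2)
  implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(pool)

  val results = Future.traverse(1 to 10) { i =>
    Future { s"response $i" } // stand-in for a non-blocking HTTP call
  }

  println(Await.result(results, 10.seconds).size) // prints: 10
  pool.shutdown()
}
```

This caps execution parallelism but not memory: all ten futures are created up front and queue on the pool, which is exactly the kind of bookkeeping Akka Streams' mapAsync does for you.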

On the other hand, Akka Streams gives us a nice api for backpressure when we're dealing with asynchronous processing:

val greetingFlow =
  Flow[String]
    .mapAsync(1) { name =>
      Future { s"hello, $name" }
    }

Source
  .tick(
    initialDelay = 1 second,
    interval = 1 second,
    tick = "Gabriel"
  )
  .via(greetingFlow)
  .runForeach(println)

Conclusion

TL;DR? Of course we'll have a simple conclusion (mostly for those who are lazy and don't wanna read the whole post). I have to say I had fun working with both, so don't get me wrong: I find both of them great libraries for working with integrations. Well, here's the synopsis of this whole post:

[Image: Alpakka vs Camel conclusion summary]

I hope this helped and didn't piss anyone off. If there's anything wrong, or if you liked this post, please let me know in the comments.

Thanks!

[]'s

Originally published on www.thedevpiece.com