Noobs guide to Cloud Native microservices in C++

The title is there for good SEO. If you are thinking you will be doing this in just C++, please reconsider your decision; I would suggest using Go for stuff like this. And that is all for warnings.

My choice of tools to work with C++ is of course CMake, vcpkg and GCC. Things change depending on the requirements but for the most part these are constant.

Talking about microservices, let's first get an echo service going in C++. I have been told that cpprestsdk is a pretty good library for writing REST APIs and clients, so I definitely wanted to explore it. The first step is a CMakeLists.txt and an accompanying vcpkg.json.

cmake_minimum_required(VERSION 3.12)

set(PROJECT_NAME service)
project(${PROJECT_NAME})

set(CMAKE_BUILD_TYPE Debug)
set(CMAKE_CXX_STANDARD 20)

find_package(cpprestsdk CONFIG REQUIRED)

set(SOURCES src/main.cc)

add_executable(${PROJECT_NAME} ${SOURCES})
target_link_libraries(${PROJECT_NAME} cpprestsdk::cpprest)
CMakeLists.txt for the service

I named my service, "service", pretty creative but you can choose whatever you want. Next up is vcpkg.json,

{
    "name": "service",
    "version-string": "0.0.1",
    "dependencies": [
        "cpprestsdk"
    ]
}
vcpkg.json for service

Now, since we need to handle the requests sent to our server, it is time to write an HTTP handler,

void echo_handler(web::http::http_request req) {
    req.reply(web::http::status_codes::OK, "It works\n");
}
http handler to handle requests

Wait is it that easy? Kind of yes, but that is definitely not the end result we want, this is still a work in progress. We still need to listen to a port in our machine to serve our request, for that we will be using a http_listener.

int main(){
    auto listener = http_listener("http://0.0.0.0:8080/");
    // register the handler before opening the listener,
    // so there is no window where requests arrive unhandled
    listener.support(web::http::methods::GET, echo_handler);
    listener.open().wait();
    std::cout << "Press Enter to exit." << std::endl;
    std::cin.get();
    listener.close().wait();
}
creating a listener

Here I just opened a listener on port 8080 of my machine and added the echo handler to handle GET requests made to the / path. The listener is asynchronous, which means it won't block the executing thread the way larger frameworks do when they start listening. We therefore need to block manually, with std::cin.get(), which waits until we press return.

If you are wondering which headers I included, they are <cpprest/http_listener.h> and of course <iostream>. I also used using web::http::experimental::listener::http_listener; to avoid writing out the whole name for the listener.

Time to build and test it. If you don't know how to use vcpkg, clone this repo, run the bootstrap script and pass vcpkg.cmake as CMAKE_TOOLCHAIN_FILE when calling CMake. It will take care of downloading and building the rest; no need to install cpprestsdk manually.

To test the executable, you can just do curl localhost:8080 if you are on Linux or macOS, and It works should be printed. For Windows I suppose it was something like Invoke-WebRequest localhost:8080 (this could be wrong!!), though you can always put localhost:8080 in the browser and test.

Time to up the game. The handler we wrote above is definitely not an echo handler, because an echo handler would return whatever we throw at it as the response. Let's make it that way then.

void echo_handler(web::http::http_request req) {
    auto s = req.relative_uri()
                .to_string()
                .substr(1) + "\n";
    req.reply(web::http::status_codes::OK, s);
}
echoing the relative URI

What it does is fetch the relative URI, which would be /it/works for something like localhost:8080/it/works, remove the initial forward slash, append a line break, and return that as the response. Keep in mind that if my listener were at localhost:8080/it, the relative URI would be just /works.

Something dynamic, finally. Can we do better? Of course we can: instead of returning plain text, we can return JSON.

void echo_handler(web::http::http_request req) {
    auto s = req.relative_uri()
                .to_string()
                .substr(1);
    auto response = web::json::value{};
    response["value"] = web::json::value::string(s);
    req.reply(web::http::status_codes::OK, response);
}
returning json response

A JSON value and a JSON object are interchangeable here, I think; I just settled on value since I couldn't write to an object. Anyway, for something like localhost:8080/hello it would return,

{
    "value" : "hello"
}
json response

Let's make it a bit better: instead of returning the path, we will return the query string as a JSON object,

void echo_handler(web::http::http_request req) {
    auto params = web::uri::split_query(req.request_uri().query());
    auto response = web::json::value{};
    // bind by reference to avoid copying each key/value pair
    for(const auto& [key, value] : params) {
        response[key] = web::json::value(value);
    }
    req.reply(web::http::status_codes::OK, response);
}
return query string as json object

So now for a request like localhost:8080?a=yes&b=no it would return,

{
  "a": "yes",
  "b": "no"
}
json response

Pretty good. Remember to wrap the URL in quotes if you are using curl, because the shell would treat the stuff after & as a separate command.

But how is it "Cloud Native"? Well, the cloud is just someone else's computer, and cloud native means your program shouldn't have any problems running on someone else's computer. That is just the overall idea; with it come things like being able to switch computers easily and add new computers if necessary. By no means am I an expert on this stuff, so do consult someone who has a better understanding.

Before going ahead, let's change the way we are blocking the main thread. We can do far better than waiting for the user to press return, for example by waiting for the interrupt signal, aka pressing Ctrl+C.

First we need an infinite while loop that depends on a flag. When the interrupt signal arrives, we just flip the flag.

int main() {
    .
    .
    .
    std::signal(SIGINT, [](int){ stop = true; });
    while(!stop);  // busy-wait; fine for a demo, but it keeps one core at 100%
    listener.close().wait();
}

using std::signal with infinite while loop

Here stop is a global flag. Since globals are bad and we are not into creating classes this time, we put it into an anonymous namespace as good practice. Strictly speaking, a plain bool is not safe to touch from a signal handler; the type blessed for this is volatile std::sig_atomic_t, and the volatile also keeps the compiler from optimizing the empty loop away. Also don't forget to include <csignal>.

namespace {
    volatile std::sig_atomic_t stop = false;
}
global flag under an anonymous namespace

So what is cloud native without containers? And when we say containers, I am sure we only mean Docker. Let's dockerize our echo service then. For that I will shamelessly copy the Dockerfile from here, with some changes of mine of course.

FROM alpine:latest as build

LABEL description="Build container"

RUN apk update && apk add --no-cache \
    autoconf build-base binutils cmake curl file gcc g++ git libgcc libtool linux-headers make musl-dev ninja tar unzip wget zip pkgconf

ENV VCPKG_FORCE_SYSTEM_BINARIES=1

RUN cd /tmp \
    && git clone https://github.com/Microsoft/vcpkg.git \
    && cd vcpkg \
    && ./bootstrap-vcpkg.sh -useSystemBinaries

COPY . /service
WORKDIR /service
RUN mkdir out \
    && cd out \
    && cmake .. -DCMAKE_TOOLCHAIN_FILE=/tmp/vcpkg/scripts/buildsystems/vcpkg.cmake -GNinja \
    && ninja

FROM alpine:latest as runtime

LABEL description="Run container"

RUN apk update && apk add --no-cache \
    libstdc++

COPY --from=build /service/out/service /usr/bin/service

WORKDIR /usr/bin

CMD ./service

EXPOSE 8080
Dockerfile 

If you are not comfortable with Alpine, feel free to switch to something like Debian. If you are using Debian and are on a Linux machine, this can be simply reduced to,

FROM debian:latest

LABEL description="Run container"

COPY ./build/service /usr/bin/service

WORKDIR /usr/bin

CMD ./service

EXPOSE 8080
Dockerfile for debian instead of alpine

Assuming you are building the executable in the build folder. After that it is just build and run.

docker build -t <name> .
docker run -p 8080:8080 <name>
docker commands for building and running the container

It would be a shame if we came this far and didn't touch Kubernetes while talking cloud native. So gear up and install kubectl & minikube on your machine. In addition, create an account on Docker Hub and then log in through Docker Desktop; if you are on Linux, docker login will take you the right way. Next up, tag your image properly,

docker tag <name> <username>/<name>
tagging the container

Here the name is the one you passed while building the image, and username is your username from Docker Hub. Then just push with docker push <username>/<name>.

Though writing a deployment configuration doesn't make much sense for a single service, we will still go ahead and write one,

apiVersion: apps/v1
kind: Deployment
metadata:
  name: <deploymentname>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <applabel>
  template:
    metadata:
      labels:
        app: <applabel>
    spec:
      containers:
        - name: <containername>
          image: <username>/<name>
          ports:
            - containerPort: 8080
              name: http
kubernetes deployment file

Just keep in mind that <username>/<name> should be the same as the image you pushed to Docker Hub. Spin up a local cluster with minikube start and check with kubectl get nodes whether it is up. After that, just pass the above deployment yaml to kubectl.

kubectl create -f <deploymentyaml>
creating kubernetes deployment

Now if you do a kubectl get pods you would see something like,

NAME                                 READY   STATUS    RESTARTS   AGE
service-deployment-bdf6767f7-9wg9b   1/1     Running   0          10m
output of kubernetes get pods

Time to test this out. Let's forward port 8080 from the pod to our machine with,

kubectl port-forward <podname> 8080:8080
port-forwarding from pod

The <podname> is the one you got from the previous command. Now if trying out the API works, it means we have successfully made a cloud native echo service in C++.

Questions and suggestions are always welcome, feel free to tag or dm me on Twitter @hellozee54 or even better if you poke me @hellozee on freenode.