Example Use Cases

Llama Logs was designed to be versatile and to give you control over how your systems are visualized.

The examples below show how Llama Logs can help you gain insights into your systems that were never before available.


All example code is shown using the Node.js client.

However, the Llama Logs client API is the same across all supported languages.
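
Each snippet below also assumes the client has been required and initialized once at startup. A minimal setup sketch is shown here; the package name and the option names accountKey and graphName are assumptions, so check your client's documentation for the exact signature.

Client Setup

// assumes the client is installed as the "llamalogs" package
const LlamaLogs = require('llamalogs')

// initialize once at application startup (option names are assumptions)
LlamaLogs.init({
    accountKey: "your-account-key",
    graphName: "example-graph"
})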



1. A Window into Complex Architectures


Many system architectures grow complex over time and can only be documented with manually created diagrams.

Llama Logs lets you automatically graph all of that complexity through code.


The graph above shows a simulated system in which an API endpoint passes requests to microservices through Kafka.

The microservices then each make requests to a shared data provider.

Finally, the microservices save their output to separate databases.


Normally, accurate documentation would require wiki pages and diagrams updated for each change in the system.

With Llama Logs, the entire architecture is graphed automatically and all the traffic patterns are available for live inspection.


API Request Handler

function handleApiRequest(requestData) {
    LlamaLogs.log({sender: "User", receiver: "API Handler"})
    // ... process the request
    LlamaLogs.log({sender: "API Handler", receiver: "Kafka"})
    // ... send the request to Kafka
}

Microservice One

function kafkaRecipientHandlerOne(requestData) {
    LlamaLogs.log({sender: "Kafka", receiver: "Service Handler 1"})
    // ... process the request
    LlamaLogs.log({sender: "Service Handler 1", receiver: "Data Micro Service"})
    // ... call the shared data-providing microservice
    LlamaLogs.log({sender: "Data Micro Service", receiver: "Service Handler 1"})

    LlamaLogs.log({sender: "Service Handler 1", receiver: "Database 1"})
    // ... save output to the database
}

Microservice Two

function kafkaRecipientHandlerTwo(requestData) {
    LlamaLogs.log({sender: "Kafka", receiver: "Service Handler 2"})
    // ... process the request
    LlamaLogs.log({sender: "Service Handler 2", receiver: "Data Micro Service"})
    // ... call the shared data-providing microservice
    LlamaLogs.log({sender: "Data Micro Service", receiver: "Service Handler 2"})

    LlamaLogs.log({sender: "Service Handler 2", receiver: "Database 2"})
    // ... save output to the database
}

2. Identify Issues on Individual Machines


When your application is distributed, aggregated logging can often mask issues with individual machines.

Llama Logs lets you group components by any trait, so traffic to each machine can be tracked individually.


Below is example code tracking a set of servers behind a load balancer.

By logging each machine's ID dynamically, you can visualize how the load balancer distributes requests.

In the resulting graph, it is clear that the load balancer is not distributing traffic evenly.


Web Server

function onMachineHandler(requestData) {
    LlamaLogs.log({sender: "User", receiver: "Load Balancer"})
    // context.MachineId stands in for however this machine's ID is looked up
    LlamaLogs.log({sender: "Load Balancer", receiver: context.MachineId})
    // ... process the request
}
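
The context.MachineId above is a placeholder for whatever identifier your environment provides. As one concrete (assumed) approach in Node.js, the built-in os.hostname() gives a stable per-machine name:

const os = require('os')

function onMachineHandler(requestData) {
    LlamaLogs.log({sender: "User", receiver: "Load Balancer"})
    // any stable per-machine identifier works; os.hostname() is one option
    LlamaLogs.log({sender: "Load Balancer", receiver: os.hostname()})
    // ... process the request
}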

3. Track Errors Throughout a System


Llama Logs lets you dynamically identify and visualize errors.


Below is a simple example of identifying errors in web requests.

The code dynamically determines whether the Llama Log should be marked as an error.

It also attaches the request params that caused the error.


The Llama Logs graph then lets you easily inspect the resulting errors and see which params were involved.

To allow for high throughput, the error messages are sampled in Llama Logs and not recorded 1:1.


Web Server

function handleRequest(requestData) {
    try {
        LlamaLogs.log({sender: "User", receiver: "Request Handler"})
        // ... process the request
        LlamaLogs.log({sender: "Request Handler", receiver: "User", isError: false})
    } catch (e) {
        // attach the offending request params so they can be inspected in the graph
        const brokenParams = JSON.stringify(requestData)
        LlamaLogs.log({sender: "Request Handler", receiver: "User", isError: true, message: brokenParams})
    }
}

4. Web Server Showing Traffic to Each Page


At its core, Llama Logs helps you visualize traffic in a system.

The code below is an example of a web server dynamically using the request path to generate a graph showing page traffic.


Web Server

function requestHandler(path) {
    // the request path itself becomes the receiving component in the graph
    LlamaLogs.log({sender: "User", receiver: path})
    // ... rest of the page handler code
}
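
Since each distinct receiver name becomes its own component, paths that embed unique values (such as user IDs) can grow the graph without bound. A minimal sketch of normalizing such paths first, assuming numeric ID segments:

function requestHandler(path) {
    // collapse numeric segments like /user/123 into /user/:id (illustrative only)
    const normalizedPath = path.replace(/\/\d+/g, "/:id")
    LlamaLogs.log({sender: "User", receiver: normalizedPath})
    // ... rest of the page handler code
}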

5. Identify Which Consumers are Creating Traffic


Llama Logs lets you flip the flow of information for microservices.

By putting a Llama Log in the caller of a service, you can trace where the traffic is originating.


The code below shows an example User Info Microservice that is being called by three separate modules.

The Llama Logs graph then makes it easy to identify which modules the heavy traffic is coming from.


Validation Handler

function validateUser(userData) {
    LlamaLogs.log({sender: "Validation Module", receiver: "User Info Service"})
    UserInfoService.validate(userData)
}

Update Handler

function updateUser(userData) {
    LlamaLogs.log({sender: "Update Module", receiver: "User Info Service"})
    UserInfoService.update(userData)
}

Relation Handler

function addFriendToUser(userData) {
    LlamaLogs.log({sender: "Friend Module", receiver: "User Info Service"})
    UserInfoService.addFriend(userData)
}

6. Logically Group IoT Fleets


Since Llama Logs lets you control the identity of components in your graphs, sets of machines can be logically grouped.

Below is example code showing how you can identify incoming traffic from different fleets of IoT devices.

Since each device knows its fleet ID, the same Llama Logs code can be deployed across the board and evaluated dynamically.


IoT Application

function phoneHome() {
    // context.fleetId stands in for however the device stores its fleet ID
    const fleetId = context.fleetId
    LlamaLogs.log({sender: fleetId, receiver: "Central Server"})
    sendDataToCentralServer()
}

7. Visualize Serverless Cloud Services


Some cloud services can be difficult to debug as no physical server exists to inspect.

Llama Logs can help you better understand your cloud operations by visualizing serverless events.


Below is example code for two serverless functions.

One shows adding an item to a cloud queue service. The other shows the handler used to process items off the queue.

By visualizing the incoming and outgoing events, it is clear that the handler is not pulling events off of the queue fast enough.


Queueing Function

function addToQueue(requestData) {
    LlamaLogs.log({sender: "Intake Module", receiver: "Cloud Queue"})
    CloudQueue.push(requestData)
}

Queue Handler

function queueHandler(queueData) {
    // include the queue payload so stuck items can be inspected in the graph
    const queueText = JSON.stringify(queueData)
    LlamaLogs.log({sender: "Cloud Queue", receiver: "Handler Module", message: queueText})
    // ... process the queue data
}

8. Track Deployment Rollouts


Program metadata can be included in Llama Logs to track deployments.

The code below includes the deployed software version in the component's identity.

The resulting graph then shows that in this deployment about 25% of requests are going to the new version.


Web Server

function handleRequest(requestData) {
    // context.version stands in for however the deployed version is exposed
    const deployedVersion = context.version
    LlamaLogs.log({sender: "User", receiver: `Request Handler ${deployedVersion}`})
    // ... process the request
}