Succeed With CMS and VQ Conference Manager

Overview of VQ

What does VQ do?

VQ enables enterprise-wide conferencing based on the CMS platform. We wrap CMS with increasingly rich layers of functionality that enable customers to deploy conferencing at wide scale within an organisation, and to do so quickly. There is no requirement, for example, to break open the CMS API SDK: you can do everything through a browser, get those CMS systems into production mode far more quickly, deliver high-volume services on them and ultimately deliver value to your customers. We have heard of a number of cases where customers purchased CMS and, because of the complexity of using the API, those systems remained in their boxes for one or two years.

So, we’re enabling CMS enterprise wide conferencing and as the following diagram shows we’ve got CMS in the core and layers coming around that, for example, meeting management for concierge services, multi-tenancy, scheduling, single sign on and reporting to call out a few, and then in the outer layer we have applications which are geared towards the end user and help them take control of their conferencing destiny. They can see their conferencing from where they work, be that Jabber or Outlook, for example.

Why do customers come to VQ?

We've got the integrated set of tooling that the diagram above shows. Customers coming from the Codian world (Codian is going end of life) can use the meeting management that VQ provides. We've also got the tooling that supports the self-service model. For those of you who may not know, the CMS platform was designed to scale and deliver large volumes of users. We provide the tooling that enables you to provision several thousand users of this type or several thousand of that type, with each group getting the right set of calls for their requirements. That automation of provisioning is key because it ultimately enables the big systems (the 25,000+ user systems) to be deployed with very few staff; we're trying to make software do the work. We've also got the integrated Elasticsearch and Kibana reporting so people understand what their system is doing, and an increasing range of self-service tools that sit around this in the form of the Jabber extension, the Outlook Add-in and Plug-in, and our iOS phone app. And we've added Single Sign-On, which is significant because it's secure and enables two-factor authentication (2FA). One of the areas we've been successful in is Federal Government: 2FA is quite unique to us and has enabled us to make significant design wins in that particular vertical market.

What are the principal challenges customers are facing?

There are many, but the broad vectors are as follows.

Self-service: customers want to enable employees across the organisation. There may be several hundred or several thousand users (we are also following 50,000 and even 80,000 user opportunities). We are enabling end users to work across the organisation, and with the impact of Coronavirus a large number of companies are looking to enable their workforce to work from home. This is made possible by the scalability of CMS and what VQ enables. Self-service is a big thing; it's what CMS was designed to do, and it does it incredibly well.

The other vector we see is that the Codian/TelePresence blades are going end of life at the end of May, so many customers are looking to replace those. CMS is the answer, and we provide a richness that enables those Codian users to migrate their workloads onto the way VQ enables meetings to be managed.

Security is another vector: we've added Single Sign-On and SAML2 authentication, and this is significant because it includes 2FA. Certainly when you get to the Federal (DoD), intelligence community or government areas, that 2FA is a key enabler.

The final vector is reporting and logging: we use Elasticsearch and Kibana to deliver best-of-breed reporting to customers in terms of what their systems are doing.

We enable each of these vectors to be addressed. Quite often customers start off with a particular set of requirements, then begin to think about how their services will evolve and come to appreciate that VQ has the breadth of functionality to take them from where they are in the early days through to more sophisticated deployments.

Customers are looking for a solution that starts by solving one thing and then grows out to address a range of evolving needs. That approach has proved very successful.

How to setup a watcher to link to a dashboard in Kibana (Part 2/2)

(This is the second part of the documentation; it builds on what is explained in the first part. Click here to read part 1.)

Objectives of this blog post:

Describe the steps to follow to detect a condition and then generate a mail/message with links to dashboards. Those links can be configured to display different time periods; in this case, it is a 10-minute window centered on the triggering event. The example contains a single dashboard, but it could contain many, and each dashboard might contain many visualizations. The key concept is that all of the data required to support any analysis of the event trigger is delivered to whoever needs to look at it. For example, rather than an email, it could be pushed to Teams, Slack or ServiceNow.

Step 1: Create a dashboard that displays relevant data

Documentation for creating dashboards

First of all, you need to create a Kibana dashboard that will display the information you want to see once the watcher alert is triggered. Then, through the watcher actions, we will adjust the time range to focus on what happened around the moment the error was found.

Once you have created your dashboard and added the components you desire, make sure the time range is in absolute mode. You then need to copy the URL of the dashboard from the address bar in your browser and save it somewhere. It should contain a part that looks similar to this; it is what we will modify in the watcher:

…time:(from:'2020-01-29T16:48:54.911Z',mode:absolute,to:'2020-01-29T16:49:54.911Z')…
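As a sketch of what the watcher will ultimately do with this fragment, the absolute from/to timestamps can be rewritten with a simple substitution. The Python below is purely illustrative (the URL, host and timestamps are made up):

```python
import re

# Example dashboard URL copied from the browser (illustrative values only).
saved_url = ("https://kibana.example.com/app/kibana#/dashboard/my-dashboard"
             "?_g=(time:(from:'2020-01-29T16:48:54.911Z',mode:absolute,"
             "to:'2020-01-29T16:49:54.911Z'))")

def set_time_range(url, from_ts, to_ts):
    """Replace the absolute from/to timestamps embedded in the URL."""
    url = re.sub(r"from:'[^']*'", "from:'" + from_ts + "'", url)
    url = re.sub(r"to:'[^']*'", "to:'" + to_ts + "'", url)
    return url

# Re-point the dashboard at a new 10-minute window.
new_url = set_time_range(saved_url,
                         "2020-03-01T10:00:00.000Z",
                         "2020-03-01T10:10:00.000Z")
```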

Step 2: Transform the watcher previously created

Documentation for payload transforms

In order to replace the timestamps used in the URL with the ones retrieved from the Kibana search, we need to modify the structure of the watcher to use variables. We will transform the payload (the result of the watcher searches) to create a new one containing the data we want to use in the email action.

"transform": {
  "script": {
    "source": "def[] items; def firstSearchHits = ctx.payload.first.hits.total; def secondSearchHits = ctx.payload.second.hits.total; def fromTime = ctx.execution_time.plusSeconds(-300); def toTime = ctx.execution_time.plusSeconds(300); def firstErrorTime = ctx.payload.first.hits.hits[0]._source.timestamp; items = new def[] {firstSearchHits, secondSearchHits, fromTime, toTime, firstErrorTime}; return items;",
    "lang": "painless"
  }
}

We have now defined a time range that starts 5 minutes before the moment the watch was triggered, and ends 5 minutes after it was triggered.
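The same window arithmetic, sketched in Python for clarity (the trigger time is an example value; in the watcher this is done with ctx.execution_time.plusSeconds):

```python
from datetime import datetime, timedelta, timezone

# Example trigger time (in the watcher this is ctx.execution_time).
execution_time = datetime(2020, 1, 29, 16, 49, 0, tzinfo=timezone.utc)

# 5 minutes either side of the trigger -> a 10-minute window.
from_time = execution_time - timedelta(seconds=300)
to_time = execution_time + timedelta(seconds=300)
window = to_time - from_time
```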

After this operation, the newly transformed payload contains an array of values (defined in items), and those can be accessed using ctx.payload._value.{index_in_array}.

The transform block is executed after the search (after the first payload is created), but before any other actions. Therefore, we need to update the previous references to the payload in order to use the new ones.

Step 3: Utilize the new payload data

After the transform, we have an array of data containing: {firstSearchHits,secondSearchHits,fromTime,toTime,firstErrorTime}

We can now replace the previous references to the payload data. In our case, it concerns the Email Action condition, and the information returned to the user in the email.

Here is the change for the action condition:

"source": "return ctx.payload.first.hits.total > 0 && ctx.payload.first.hits.total < 30 && ctx.payload.second.hits.total != 30",

TO

"source": "return ctx.payload._value[0] > 0 && ctx.payload._value[0] < 30 && ctx.payload._value[1] != 30",

(note: In Painless scripts, you access the values in the array with _value[index]. In the rest of the watcher, you have to use _value.index. In both cases, the index starts at 0.)

In the body of the email, we can now include useful data more easily, for example:

"text": "The watcher has detected CDR Connection Failure errors. The first error happened at: {{ctx.payload._value.4}}\nThere are {{ctx.payload._value.0}} hits in the first 30s after {{ctx.payload._value.2}}, and {{ctx.payload._value.1}} hits in the 30s period before the search."

We can also include the previously created dashboard URL, by modifying it with the new values. The change should look like this:

…time:(from:'{{ctx.payload._value.2}}',mode:absolute,to:'{{ctx.payload._value.3}}')…

(note: It is currently impossible to create a shorter version of the link)
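To make the substitution concrete, here is an illustrative Python sketch of how the templated indices map onto the transformed payload array (the values are made up; in the watcher the substitution is done by the {{...}} templating):

```python
# Order matches the transform: [firstSearchHits, secondSearchHits,
#                               fromTime, toTime, firstErrorTime]
payload_value = [5, 0,
                 "2020-01-29T16:44:00.000Z",
                 "2020-01-29T16:54:00.000Z",
                 "2020-01-29T16:49:02.100Z"]

# Equivalent of ...from:'{{ctx.payload._value.2}}'...to:'{{ctx.payload._value.3}}'...
link_fragment = "time:(from:'{0}',mode:absolute,to:'{1}')".format(
    payload_value[2], payload_value[3])
```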

The email received by the user should look like this, and contain a link to the dashboard displaying the correct time range:

How to set up a watcher to detect CDR Connection Failure errors in Kibana (Part 1/2)

Objectives of this blog post:

Describe the steps to follow to configure and create a simple watcher that detects a condition and sends an email when triggered. This document focuses on a real use case, the monitoring of CDR Connection Failure errors, which can be used as an example and a base for other applications.

Watch the video

Step 1: Get an Elastic Stack subscription license

Watchers are part of the Elastic Stack subscription license, so a license is required before doing any alerting.

Step 2: Set up notifications

When the watcher triggers, you can choose to add an action that sends a notification that an event was fired. You can then specify the message you want to send, but first you need to set up the notifications.

–       Email notification:

Documentation for email notification settings

Documentation for different email profiles

To configure the SMTP server you want to use and the email account, go to the “Dev Tools” tab in your Kibana page, and input the following request:

PUT _cluster/settings
{
  "persistent": {
    "xpack.notification.email": {
      "account": {
        "NAME_OF_ACCOUNT": {
          "smtp": {
            "host": "SMTP_HOST",
            "port": "SMTP_PORT"
          }
        }
      }
    }
  }
}

If authentication is needed, you can specify an smtp.user and an smtp.secure_password.
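As a sketch, the settings body with a user added might look like the following (host, port and user are placeholders; note that smtp.secure_password is a secure setting and, in recent Elasticsearch versions, is added to the keystore rather than sent in this request):

```python
import json

# Illustrative cluster-settings body with SMTP authentication added.
email_settings = {
    "persistent": {
        "xpack.notification.email": {
            "account": {
                "NAME_OF_ACCOUNT": {
                    "smtp": {
                        "host": "SMTP_HOST",   # placeholder
                        "port": "SMTP_PORT",   # placeholder
                        "user": "SMTP_USER",   # placeholder
                    }
                }
            }
        }
    }
}

body = json.dumps(email_settings, indent=2)  # what you'd PUT to _cluster/settings
```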

–       Slack notification:

Documentation for slack notification settings

Documentation for slack webhooks

You need to create a webhook for your Slack workspace. Follow the steps described in the link above (you need administrator privileges). Once that is done, go to the "Dev Tools" tab and input the following request:

PUT _cluster/settings
{
  "persistent": {
    "xpack.notification.slack": {
      "account": {
        "ACCOUNT_NAME": {
          "secure_url": "SLACK_WEBHOOK_URL",
          "message_defaults": {
            "from": "Kibana Watch",
            "to": "DESTINATION",
            "icon": "http://example.com/images/watcher-icon.jpg",
            "attachment": {
              "fallback": "X-Pack Notification",
              "color": "#36a64f",
              "title": "X-Pack Notification",
              "title_link": "https://www.elastic.co/guide/en/x-pack/current/index.html",
              "text": "One of your watches generated this notification.",
              "mrkdwn_in": "pretext, text"
            }
          }
        }
      }
    }
  }
}

Step 3: Create the watcher

Go to the Management>Elasticsearch>Watcher tab, and create a new watcher.

You can use the simple “Create Threshold Alert” option, and choose the action you want to link to the alert created.

You can also use the "Create Advanced Watch" option, where you can customize your watcher in more detail.

Recommendation:

The window provided by the Kibana page for editing the watcher is really small and inconvenient. It is highly recommended to edit the watcher in software that can display JSON files properly, ideally an editor that manages indentation and colors the text for better readability, such as a text editor (e.g. Notepad++) or an IDE (e.g. Visual Studio). You can then edit the file in your editor with ease, and paste it back into the Kibana window to save it.
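Before pasting the edited watcher back, it is worth round-tripping it through a JSON parser, since malformed JSON fails loudly. A minimal Python sketch (the watcher fragment is illustrative):

```python
import json

# A fragment of watcher JSON as you might have it in your editor.
raw = '{"trigger": {"schedule": {"interval": "30s"}}}'

watcher = json.loads(raw)                # raises ValueError if the JSON is malformed
pretty = json.dumps(watcher, indent=2)   # re-indented copy, ready to paste back
```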

Here is a complete example of a watcher that tracks CDR Connection Failure errors every 30 seconds, by detecting the first time the error occurs and sending an email with the time it was detected:

{
  "trigger": {
    "schedule": {
      "interval": "30s"
    }
  },
  "input": {
    "chain": {
      "inputs": [
        {
          "first": {
            "search": {
              "request": {
                "search_type": "query_then_fetch",
                "indices": [
                  "cmsalarmstates-*"
                ],
                "types": [],
                "body": {
                  "query": {
                    "bool": {
                      "must": {
                        "match": {
                          "cmsalarm.type": "cdrConnectionFailure"
                        }
                      },
                      "filter": {
                        "bool": {
                          "must": {
                            "range": {
                              "timestamp": {
                                "gte": "now-1m",
                                "lte": "now-30s"
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        },
        {
          "second": {
            "search": {
              "request": {
                "search_type": "query_then_fetch",
                "indices": [
                  "cmsalarmstates-*"
                ],
                "types": [],
                "body": {
                  "query": {
                    "bool": {
                      "must": {
                        "match": {
                          "cmsalarm.type": "cdrConnectionFailure"
                        }
                      },
                      "filter": {
                        "bool": {
                          "must": {
                            "range": {
                              "timestamp": {
                                "gte": "now-90s",
                                "lte": "now-60s"
                              }
                            }
                          }
                        }
                      }
                    }
                  }
                }
              }
            }
          }
        }
      ]
    }
  },
  "condition": {
    "always": {}
  },
  "actions": {
    "send_email": {
      "condition": {
        "script": {
          "source": "return ctx.payload.first.hits.total > 0 && ctx.payload.first.hits.total < 30 && ctx.payload.second.hits.total != 30",
          "lang": "painless"
        }
      },
      "email": {
        "profile": "standard",
        "from": "watchtest@test.com",
        "to": [
          "test@vqcomms.com"
        ],
        "subject": "Watcher Notification",
        "body": {
          "text": "CDR Connection Failure : ({{ctx.execution_time}})"
        }
      }
    }
  }
}

In this case, the CMS server produces an error if the CDR receiver cannot be reached. This happens every second until the problem is resolved. We want to set up a watcher that triggers on the first error but doesn't continue to send notifications after the first one.

In order to do so, this watcher uses chained searches:

  • The first one searches through the logs in the time range [now-60s ; now-30s] to make sure no new logs are missed. We verify that at least one error is found, but fewer than 30, which is the maximum number of errors over that period (1 error/s). This avoids triggering the action multiple times after the first detection if the error keeps being sent regularly.
  • The second search uses the time range of the previous iteration of the watcher, i.e. 30s earlier: [now-90s ; now-60s]. It is used to verify whether the error was already present 30s before the current search. If it was, it has already been handled and we do not need to send another notification.

Here is the “painless” script that defines the condition to decide if we send a notification email or not:

return ctx.payload.first.hits.total > 0 && ctx.payload.first.hits.total < 30 && ctx.payload.second.hits.total != 30
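The same edge-detection logic can be expressed outside Painless. This illustrative Python function (not part of the watcher itself) mirrors the condition, where 30 is the maximum hit count for a 30s window at 1 error/s:

```python
def should_notify(first_hits, second_hits, window_max=30):
    """Alert only on a rising edge: errors present (but not saturating) in the
    newest window, and the previous window not already saturated."""
    return 0 < first_hits < window_max and second_hits != window_max

# Errors just started, previous window clean -> notify.
# Both windows saturated (failure ongoing)   -> stay quiet.
```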

In the second part of this documentation, we will see how to add a link to a dashboard covering the 10-minute window around the triggering event.

Read part 2 of this blog

VQ Conference Manager 3.3

We’ve released 3.3 today and it’s now available for download from vqcomms.com.

3.3’s significant in that it demonstrates the value of some of the decisions we made on the 3.x platform. The adoption of a standards based authentication process based on a best of breed Identity Server which was, in turn, enabled by our decision to move to a Kubernetes architecture based on containers meant that the reality of getting ADFS to work was far less daunting than it might otherwise have been. It turned out to be surprisingly easy; we had to make a very small code change.

Adding a Jabber Add-in is the same; we were able to repackage a lot of what we have for the Outlook Add-in. So…..if you have ideas on adding conference control to other applications or services that support a browser based UI, please drop me a mail.

The UI changes might appear small but they're the precursor to what happens next….lots of work under the hood and we'll now slowly start introducing new functionality and coolness. The big one planned for 3.4 is participant move and, following that, pane placement.

Other cool stuff that you don’t really see are things like use of Ansible (https://www.ansible.com/) which is significantly helping and makes some of the complicated stuff that’s required for High Availability in future releases possible. Anybody who’s interested in Ansible, please also drop me a mail.

So, 3.3, another really good step forward. Enjoy.

VQ Conference Manager 3.2

I’m really pleased to announce that VQ Conference Manager 3.2 is now available for download.

3.2’s an exciting release because of what’s in it and, moving forward, what it enables.

What’s in it? Lots of refinements:

  • The data written into Elastic has been restructured to make it easier to produce visualizations. Goodness like CMS Alarms is now available. We've updated all the Visualizations and Dashboards.
  • The Outlook Add-in now caches the login name so users don’t have to repeatedly type it in (passwords are still required in non SSO mode)
  • We’ve added support for Cisco Duo to the list of supported SAML2 providers.
  • The Blast-Dial app now includes an optional “press-1 to join” message when each of the Blast-Dial recipients receives a call.
  • CM-Admin refinements: changing certificates and Outlook Add-in whitelists in SAML2 mode (to name but two).
  • Bug fixes to the Call CoApp. Placing outbound calls from an inactive Space now works consistently.
  • The LDAP Configuration page has been cleaned up and restructured into a page with 3 tabs; much easier to use
  • The Bulk-Emailer contains a second template that includes functionality to auto-configure the iOS app. Very cool.
  • Bug fixes

What it enables:

  • One of the big 'under the hood' changes is upgrading Kubernetes to the latest version (1.15). 1.15 includes the Beta High Availability ("HA") support. Please note that this is not VQCM High Availability at 3.2; it's the enabler that will (should) enable HA in the next release of VQCM (3.3; due Q4/2019).
  • With the Elastic data restructuring just about complete, we also plan to enable syslog ingest in 3.3 (due Q4/2019), which we're also really excited about. The goal here is that we'll be able to save syslog data in Elastic and, by doing so, be in a position to include CMS/VCS logging as part of the data contained in visualizations/dashboards.

More information is available in the release notes which are available for download from vqcomms.com. Please ensure you select version 3.2.

Regards

Mike

Releasing VQ Conference Manager 3.1

3.1 is available for download from vqcomms.com; there's a sense of relief here at VQ Towers that we finally delivered it. A huge amount of work went into 3.1 and it represents another big step forward. Some of the functionality might be described in small words (for example, Single Sign-On or "SSO") but behind the scenes a lot of work went into getting the OpenID Connect ("OIDC") Identity Server up and running; not only do we get SAML2 and Windows Authentication with it, we can also now authenticate with services such as Google. It's been quite an eye-opener to get exposed to state-of-the-art authentication and the services provided by companies such as Okta (and their partners, JAMF and Yubico). From a technical perspective, I must admit I also found it remarkable how clean the mechanisms are and how, for example, access tokens are cryptographically signed and the consuming software can use the "time to live" value to know when the token expires. Very neat.

3.1 is an example of what is possible because of the architectural decisions we made for the 3.x platform; the OIDC certified Identity Server is hosted as a Container and is packaged as part of the VQCM 3.1 VM. Reactive Calls (aka Blast Dial) is another example of what’s now possible; it’s packaged as a small service that sits on the message bus and responds to events generated by the VQCM core.

Here’s a very high level overview of what’s in 3.1:

  • SAML 2 and Windows Authentication Single-Sign-On
  • Reactive-Call (aka Blast-Dial) functionality
  • Outlook plug-in
  • iOS App
  • Major update to Kibana dashboards and reporting
  • URI and Call Id generators allowing auto creation of URIs and Call Ids. Addition of a “Random” Generator
  • Support for Secondary Call Ids
  • Control Space creation, updates and deletes via the UX Profile
  • Disable the green + bar
  • Addition of a ‘now’ button to the schedule new meeting datetime picker
  • The VQCM user interface can be branded
  • Updates to the latest component versions (Centos, Kubernetes and Elasticsearch)
  • Space Template refinement to control participant level for join/leave tones

We hope you like it.

Regards

Mike

VQ’s Growing Number of Unified Communications (“UC”) Apps

We're increasingly thinking of VQ Conference Manager in terms of it being a platform that enables UC problems to be solved. With the base functionality stable and delivering really big workloads, we're now starting to introduce more of what we're calling "UC Apps". These sit outside the VQCM web user interface and are designed to help end users make more calls, either because they have higher degrees of control or simply because we've made it easier to join calls.

With 3.1 heading towards beta status, we’ll be introducing the following new apps:

  • Outlook plug-in
  • iOS phone app
  • Blast Dial (also known as Reactive Calls)

These join the Outlook Add-in launched with the initial version of 3.0. The UC apps get even better in 3.1 for customers with Single Sign-On because the UC Apps also support SSO! No more having to remember passwords and one less barrier to adoption.

Blast Dial is headless for 3.1 in that it does not have a user interface; you configure it via a config file. Blast-Dial allows a Space to be defined where if somebody calls into the Space, the Space automatically calls out to a predefined list of attendees. Ideal for use in environments where a team needs to respond to an event.

A variant of the Outlook Add-in we’re playing with at the moment works with Google G Suite. Again, it’s enabled because VQCM 3.1 supports Open ID Connect “OIDC” which is the authentication protocol used by services such as Google. I’ll keep you updated on how that particular skunk project unfolds.

Behind the scenes, more are coming. What's also interesting is how a further subset of "utilities" or tools is evolving to meet customer requests to solve problems they face. These typically run from a command line and use VQ's REST API; they solve problems customers are facing in a tactical manner – we can get them done quickly because they sit outside the core VQCM product. Here are a couple of examples:

  • SetSpacePin; a command line utility that works with a customer’s PowerShell scripting. The PowerShell script listens on an Exchange mailbox and allows users to request details for their Space or send a calendar request to an Exchange mailbox; if the calendar request contains a specific keyword, the PowerShell script calls the command line tool and sets the Pin/Passcode on the User’s Space. It’s a concept at the moment and the jury’s still out on whether the customer will deploy it. Initial feedback is positive.
  • Another example is from last week; a customer would like to enable streaming on specific Spaces and set the Streaming URL. We’re putting together a small utility that’ll allow their operators to do this and therefore avoid the pain of doing it via the Postman API.

Having said all that, I do need to set the correct expectation. VQCM 3 is enabling us to start thinking in terms of a platform and UC Apps. Our APIs are not yet public; they need a lot of love to iron out inconsistencies and they’re not documented. Work is underway to address that (and it looks pretty nifty) but it’s going to be some time before we go public with it.

If this blog has triggered any thoughts about what UC Apps (with or without UI) that you think would help solve a UC problem you’re experiencing, please drop us a line.

VQ Conference Manager’s “get out of jail” dial-plan tooling

VQ Conference Manager’s dial-plan tooling and functionality has been driven by customer requests for help to ensure that the Call Ids and URIs generated by VQ fit within the dial-plan of the hosting organization.

The initial requests were simple: can we make it possible to configure the prefix that’s used for each auto-generated Call Id.

That was followed by the "Auto-Increment" keyword, which allowed URI and Call Id values to be generated during the LDAP Import process; some systems didn't have LDAP/Active Directory attributes from which these values could be imported, and the delivery team were not able to get the LDAP/AD schema changed (a not uncommon situation with these types of request). The auto-increment values were of defined length and could be prefixed or post-fixed with values we were able to identify, which allowed customers to change the prefixes but keep the auto-increment values – this particular ability saved one large bank during a mid-service update to their dial-plan.

More recent ones have been more sophisticated; we added a Secondary Call Id field on the LDAP Configuration page to allow "short" Call Ids to be defined. The customer in question had a large user base and very high call volumes. Audio conferencing users were complaining that the Call Id format that had been designed into the solution was user-unfriendly and they were having to enter Call Ids that were too long. We added a secondary Call Id consisting of the last 5 digits of the Call Id (via an LDAP attribute transform); the change retained compatibility and allowed users to use the short Call Ids when joining calls.

The latest change available in 3.1 (coming very soon) is another really cool one (it’s actually several):

We've taken the Auto-Increment concept from the LDAP Config page (used to generate Call Id and URI values) and added support for it to the Space Template page. The Space Template page also supports a new Call-Id/URI Generator called "Random" that, as the name suggests, enables the generation of less easily guessed values (auto-increment always generates the previous value plus one). Where this gets really cool (and massively user-friendly) is when a user comes to create a new Call or Space based on the Space Template: the Call Id and URI values are inserted automatically. Because the Auto-Increment and Random keywords can be prefixed and post-fixed with additional information, the administrator can define the exact URI values that will be generated. The URIs will never clash and the user will never have to think of the value to use. VQCM will also delete the Space (and URI) after the call completes.
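As an illustrative sketch (this is not VQCM's actual implementation), the two generator styles might look like this:

```python
import random

def auto_increment_gen(start, prefix="", suffix=""):
    """Auto-Increment style: each value is the previous value plus one,
    optionally wrapped in a configured prefix/suffix."""
    value = start
    while True:
        yield "{0}{1}{2}".format(prefix, value, suffix)
        value += 1

def random_call_id(length=6, prefix="", suffix=""):
    """Random style: a less easily guessed value of a defined length."""
    digits = "".join(random.choice("0123456789") for _ in range(length))
    return "{0}{1}{2}".format(prefix, digits, suffix)

gen = auto_increment_gen(start=88000, prefix="91")
first_id, second_id = next(gen), next(gen)   # "9188000", "9188001"
```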

Moving forward, we expect to add more Generator keywords to perform specific tasks.

Introducing Elastic Stack (formerly Elastic X-Pack)

For those of you who may not know, here at VQ Communications we are really pleased to announce that we are now an OEM partner of Elastic. This means that you can now buy Elastic Stack (formerly called X-Pack) from VQ to work with your VQCM solution, so that not only do you get the richness Elasticsearch offers, you also benefit from additional options including, amongst others, reporting, setting threshold values on specific data, and running SQL queries against Elasticsearch.

Mike Horsley, CEO at VQ Communications, outlines why VQ made the strategic decision for the VQCM 3 platform to move all logging and data capture into the Elasticsearch database and provide visualisation via Elastic's Kibana tooling, what this means for you, and why Elastic Stack's additional optional functionality may be of importance to you.

Read on to find out more.

We made the strategic decision for the VQ Conference Manager 3 platform to move all logging and data capture into the Elasticsearch database and provide visualization via Elastic’s Kibana tooling.

We’re now about 6 months into having VQ Conference Manager 3 deployed in the field and I’m incredibly pleased (and relieved) by the massively positive feedback on the decision; I have been amazed how many customers (and potential customers) have said they’re already using Elasticsearch and Kibana within their organization.

Our initial goal in moving to Elasticsearch and Kibana was to use best-of-breed, industry-standard tooling to capture and enable the visualization of logging and reporting data coming out of VQ Conference Manager 3. We are so committed to the decision to include Elastic as part of VQ Conference Manager that we signed up to become an OEM partner of Elastic; this involved a fairly substantial $ spend over the next three years – it does mean, however, that we get support from their support teams and have been able to resolve problems quickly.

I have to admit though that there were times during the VQ Conference Manager 3 development process I worried we’d made a mistake when we were fighting with a whole raft of issues and nothing seemed to be working; there was a (quite long) period where we seemed to go backwards more than we moved forwards. However, as things started to stabilize and I started to understand how Elastic worked (and we had updated the VQ Conference Manager core to start generating the appropriate data), we started to put queries, visualizations and dashboards together that yielded results that were way in excess of our expectations, providing analysis and insight into issues that would have required huge amounts of manual work in previous versions of the product. We were able to reduce complex (and apparently random) issues down to easily digestible graphs; the problems became defined, well understood and from that, resolvable. From that point on, I was a fully paid-up member of the “Elasticsearch and Kibana is awesome society”.

Building on that experience, we moved forward (at this point, VQ Conference Manager 3 was starting to work reliably – as I’m sure you can imagine, that was quite a relief; I started sleeping again) and expanded the set of Dashboards available in 3.0. As with the internal analysis, the insight we gained into calling patterns, what the system was doing and so on was way beyond what we’d been able to do in previous versions of Acano Manager. Love blossomed.

VQ Conference Manager 3 is based on a really powerful set of technologies: Kubernetes and a concept called “Containers”. Containers are the things that contain the software components that do the work; Kubernetes is the thing that makes them all work together (the so-called orchestration layer). So, in VQ Conference Manager 3, we have a whole bunch of containers – some containing our VQ Conference Manager software and others containing things like the database, Elasticsearch and Kibana. The brilliance is that we can take off-the-shelf containers and host them alongside the VQ Conference Manager services, orchestrated by Kubernetes. Each container is, essentially, its own lightweight virtual machine (see: https://techterms.com/definition/container and https://www.docker.com/resources/what-container); we can, therefore, run different containers without having to worry that different component dependencies will interfere with each other and cause obscure system failures. Each container is isolated and runs as a well-defined black box.
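As a purely illustrative sketch of what “orchestrated by Kubernetes” means in practice, a Deployment manifest tells Kubernetes which container image to run and how. The manifest below is a generic, hypothetical example (it is not taken from VQ Conference Manager); it runs an off-the-shelf Kibana container of the kind described above.

```yaml
# Hypothetical example manifest; not an actual VQ Conference Manager file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana                # one of the off-the-shelf containers
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:6.4.0  # off-the-shelf image
          ports:
            - containerPort: 5601   # Kibana's default web UI port
```

Kubernetes reads manifests like this and keeps the declared containers running, restarting them if they fail; that is the orchestration layer doing its job.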

So, to summarize the ramble so far: VQ Conference Manager uses state-of-the-art open source technology (Containers and Kubernetes) to package software into a solution. The VQ Conference Manager 3 solution contains Elasticsearch and Kibana.

The guys and gals at Elastic pay for all of their brilliance by cleverly separating out the functionality that customers value most and making it available as an optional extra; the optional extras used to be called X-Pack and are now part of what Elastic calls the “Elastic Stack”.

Because VQ is an Elastic OEM partner, you can buy Elastic Stack licensing from VQ to work with your VQ Conference Manager solution.

The following is the list of features it enables that we think are appropriate and useful at the moment:

✔ Reporting

  • The ability to export report data as a CSV file
  • PDF export of reports (see attached example)

✔ Watcher (alerting)

  • The ability to set thresholds on specific values within the data and then send emails, post messages to Slack or inject data/messages back into Elastic

✔ Elastic SQL

  • Run SQL queries against Elasticsearch

✔ From Elastic 6.5 (VQ Conference Manager 3.1 will run at Elastic 6.4)

  • Cross-cluster replication (beta). This will become a really powerful tool – it will allow, for example, data from one Elastic cluster to be replicated to another. Usage scenarios include backing up data or having dedicated “analysis” hosts with extra capacity.
  • As VQ Conference Manager 3 based systems become bigger, this will become invaluable.

✔ Other features enabled include:

  • Graph – the ability to establish relationships between data (example use cases include fraud detection and detecting malicious system access)
  • Machine Learning – a whole bunch of coolness
  • Canvas – a next-generation visualization tool
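The Elastic SQL feature above is exercised via a plain HTTP request to the cluster. Here is a minimal sketch of building such a request; the index and field names are hypothetical, and the endpoint shown (`/_xpack/sql`) is the one used by the Elastic 6.x releases this post discusses.

```python
# Illustrative only: constructing an Elastic SQL request.
# The index name "vqcm-cdr-*" and its columns are assumptions.
import json

def sql_request(query):
    """Build the URL and JSON body for an Elastic SQL query."""
    url = "http://localhost:9200/_xpack/sql?format=txt"  # assumed local node
    body = json.dumps({"query": query})
    return url, body

url, body = sql_request(
    'SELECT status, COUNT(*) FROM "vqcm-cdr-*" GROUP BY status'
)
# `url` and `body` could then be sent with any HTTP client, e.g.:
#   curl -X POST "$url" -H 'Content-Type: application/json' -d "$body"
```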

What is striking about Elastic and Kibana is the velocity of innovation; new releases are frequent and the rate at which great new functionality is added is amazing.

We are very pleased to have added Elastic and Kibana into the VQ Conference Manager solution.

The process of buying Elastic Stack is straightforward: talk to VQ sales and raise a PO. We’ll issue you a license key which you upload into Elastic; job done.

Mike Horsley

VQ Conference Manager status update (Nov 2018)

We’ve been busy here at VQ Towers working on some pretty cool things…

  1. The initial VQCM 3.0 version shipped in June and we released 3.0.2 early October for use on production services. Adoption has been excellent and 3.0.2 is working really well in the field.
  2. Work on 3.1 is progressing well; we’re in the process of wrapping up development and are now focusing on testing. The big change in 3.1 is Single Sign-On (“SSO”). This is looking really good and provides Windows Authentication, SAML 2 and conventional AD/LDAP authentication. There are also changes to Analytics 2 which we think you’ll love, but we’ll give more details closer to release. VQCM 3.1 is targeted for Beta around the end of November and release early in the new year.
  3. We’ve added some really awesome new functionality to enable random URIs and Call Ids to be created; it’s now possible to define Space Templates and automatically generate the URIs and Call Ids when new Calls or Spaces are created. Look out for this in 2.4.1 (due late November) and 3.1.
  4. We’ve added secondary Call Ids; this has made at least one customer very happy and enabled their users to join calls using ‘short dials’. Look out for this in 2.4.1 and 3.1.
  5. Cisco Certification testing; tick. VQCM 3.0 has been through the Cisco certification process and passed with flying colors.
  6. We are now a Preferred Cisco Partner. How cool is that?
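The random Call Id and URI generation described in items 3 and 4 above can be sketched in a few lines. This is illustrative only: the id length, digits-only alphabet and domain below are assumptions for the example, not VQCM’s actual generation rules.

```python
# Illustrative sketch of random Call Id / Space URI generation.
# Lengths, alphabet and domain are assumptions, not VQCM's real rules.
import secrets
import string

def random_call_id(length=7):
    """Generate a numeric Call Id, e.g. usable as a 'short dial'."""
    return "".join(secrets.choice(string.digits) for _ in range(length))

def random_uri(prefix="meet", domain="example.com"):
    """Generate a Space URI with a random suffix."""
    suffix = secrets.token_hex(4)  # 8 hex characters
    return f"{prefix}.{suffix}@{domain}"

call_id = random_call_id()
uri = random_uri()
```

In practice a template-driven generator would also check the generated id against existing Spaces to avoid collisions before assigning it.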

Mike Horsley (CEO)