WebViewer Server manages its own work queues and opportunistically caches intermediate data locally, both in RAM and on disk. As such, access to more system resources allows the server to operate more responsively and efficiently: serving cached data improves response times and conserves CPU resources.
If your use case calls for multiple backend nodes, then a smaller number of more capable nodes is a better choice than a large number of smaller nodes -- a 4 core/8GB server will have a higher peak user capacity than two 2 core/4GB servers.
To maintain efficient operation, WebViewer Server requires access to at least 2 CPU cores, at least 2GB of RAM, and 50GB of storage space. With fewer than 2 cores, internal work queues start to behave serially, which drastically raises server response times.
Insufficient RAM limits the amount of data that can be held in the short-term cache, and it also limits the server's ability to process particularly difficult documents.
If there is insufficient storage space, the server will be unable to generate new data without first evicting cached data that is still in use by clients.
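As a sketch, the minimums above can be expressed as resource limits in a Compose file. The service name and image tag below are placeholders; depending on your Compose and engine versions, you may need service-level `cpus`/`mem_limit` keys instead of `deploy.resources`:

```yaml
services:
  webviewer-server:
    image: pdftron/webviewer-server:latest   # placeholder tag; pin a real version
    deploy:
      resources:
        limits:
          cpus: "2"      # at least 2 cores, or work queues serialize
          memory: 2g     # at least 2GB for the short-term cache
```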
The WebViewer server comes with a self-signed certificate, usable for SSL debugging purposes only.
In order to have SSL work correctly on your own domain you must provide a certificate chain file. This certificate chain file should contain, in PEM format, your server certificate, any intermediate certificates, and the private key.
Once the key is prepared you should:
- place it in the haproxy/ directory in the root directory
- set the SSL_CHAIN variable to the name of the certificate chain file
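HAProxy expects the chain to be a single PEM file: server certificate first, then any intermediates, then the private key. The commands below are an illustrative sketch only; the file names are placeholders, and the `echo` lines merely stand in for your real PEM files:

```shell
# Placeholder inputs standing in for your real PEM files.
echo "SERVER-CERT" > example.com.crt
echo "INTERMEDIATE-CA" > intermediate-ca.crt
echo "PRIVATE-KEY" > example.com.key

# Concatenate into the single chain file HAProxy expects:
# server certificate, then intermediates, then the private key.
cat example.com.crt intermediate-ca.crt example.com.key > example.com.pem
```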
The container (along with WebViewer) now has built-in support for using multiple backends behind a load balancer.
As the container is not entirely stateless, the balancer needs to fulfill a few requirements:
There is a sample configuration included in the download archive which demonstrates a fully working load balancer setup. Running docker-compose -f docker-compose_load_balance.yml up will launch a service composed of two WebViewer Server nodes behind an HAProxy load balancer front end.
In the sample setup, incoming connections are directed to the least occupied backend node, and will remain attached to that node for the remainder of the session, or until the node starts to become overloaded.
If there are no available healthy nodes, then WebViewer will attempt to continue in client-only rendering mode.
WebViewer Server does not handle authorization itself. If a file server requires authentication, the credentials must be passed to WebViewer Server on a per-request basis. We offer several options for passing authentication data to WebViewer Server so it can fetch documents that require authorization.
In the WebViewer loadDocument call you can specify custom headers - these can contain things such as authorization tokens. When WebViewer Server requests the URL specified in loadDocument, it appends these customHeaders.
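For example, a loadDocument call with an Authorization header might look like the following sketch. The token value and document URL are placeholders, and `instance` is assumed to be an already-initialized WebViewer instance:

```javascript
// Placeholder token; in practice, obtain it from your identity provider.
const authToken = 'Bearer <your-token>';

// Options for loadDocument; WebViewer Server appends these customHeaders
// when it fetches the document URL on the client's behalf.
const loadOptions = {
  customHeaders: { Authorization: authToken },
};

// With an initialized WebViewer instance:
// instance.UI.loadDocument('https://files.example.com/secure.pdf', loadOptions);
```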
WebViewer accepts signed links as an authorization method - the server will use these same links to successfully fetch files.
You may pass session cookies. This can be enabled via configuration, but it only works when WebViewer Server and the file server in question share a domain.
In addition we have several options that allow users to better control the security of the WebViewer Server:
WebViewer Server was designed to work alongside the WebViewer client. Document requests are made through WebViewer to the server; the server then fetches the requested document, renders it, and returns the completed document links to WebViewer. The WebViewer client then fetches these documents directly from the server's /data directory. The diagram below depicts this process.
In addition, when WebViewer is working in conjunction with WebViewer Server, it uses the fonts from the server instead of our publicly hosted fonts for WebViewer.
Outside of the file server and the WebViewer client, WebViewer Server has no interactions with other systems.
A distributed environment is one such as Kubernetes or the AWS Elastic Container Service, in which you may have more than one copy of the server running at once.
In a distributed environment WebViewer Server requires stable connections with users, because the WebViewer Server container has a stateful cache. Once a user begins a document conversion on a server, they must continue communicating with that same server until they request a new document. At that point, the user may be redirected to another server.
WebViewer Server achieves this in the AWS Auto-Scaling template with an HAProxy container that comes as part of the compose file. It manages user stickiness for each server until the currently used server forces a reset of the stickiness cookie, which occurs when a new document is requested. The HAProxy configuration below shows how we handle the cookie settings; it can be found in haproxy/haproxy.cfg of the WebViewer Server package.
```
# balance mode, fill the available servers equally
balance leastconn
# haproxy will either use this cookie to select a backend, or will set it once one is chosen
# preserve means that it will leave it alone if the server sets it
cookie haproxid nocache insert preserve
# a server is healthy as long as /blackbox/health returns a 2xx or 3xx response
option httpchk GET /blackbox/health
http-check expect rstatus (2|3)[0-9][0-9]
# keep sessions stuck, even to "unhealthy" servers, until the connection fails once
option persist
option redispatch 1
```
You may also run any sticky session solution you want with WebViewer Server, as long as it maintains a session with the server for the duration of a document or client connection.
You likely require a health check for your distributed environment. A running WebViewer Server offers one at http://your-address/blackbox/HealthCheck. You can learn more about it in our usage section.
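The sample HAProxy configuration treats any 2xx or 3xx response from the health endpoint as healthy. If you implement your own probe, the same rule can be expressed as a small predicate; this is a sketch, not part of the WebViewer Server API:

```javascript
// Mirror the HAProxy health rule: 2xx and 3xx responses count as healthy.
function isHealthy(statusCode) {
  return statusCode >= 200 && statusCode < 400;
}

// Example probe (address placeholder):
// fetch('http://your-address/blackbox/HealthCheck')
//   .then((res) => isHealthy(res.status));
```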