Sunday, August 17, 2014

Trustwave LP appliances behind the scenes

As mentioned in a previous post, Trustwave offers five lines of LP appliances, LP1 through LP5. Each appliance contains all three tiers of the system:


  • The user interface is a Java + Flash + PHP website. It does not look bad at all and is quite intuitive. It mainly provides event search, dashboard creation, report building and system configuration capabilities.
  • The business logic is implemented by J2EE applications running on Tomcat. 
  • The back end database is running on MySQL. 

The appliances run syslog-ng to receive the log messages and save them to a specific "Inbox" directory, depending on the filter defined for each data source ("Device"). A couple of Java processes, namely "DA", "DL" and "DLA", parse the logs using regular expressions, upload them to the database and archive them, respectively. Another process, "RG", checks the logs against the notification criteria configured by the user.
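To get a feel for what that parsing step involves, here is a rough sketch of regex-based syslog parsing in Java. It is purely illustrative: the pattern, the sample line and the class name are mine, not Trustwave's, and the real parsers handle far more formats.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative only: split a BSD-style syslog line into priority, timestamp,
    // host and message fields, roughly what a "DL"-style parser would start from.
    public class SyslogLineParser {

        // <PRI>MMM dd HH:mm:ss host message  (simplified; real devices vary a lot)
        private static final Pattern SYSLOG = Pattern.compile(
                "^<(\\d{1,3})>(\\w{3}\\s+\\d{1,2} \\d{2}:\\d{2}:\\d{2}) (\\S+) (.*)$");

        public static void main(String[] args) {
            String raw = "<34>Aug 17 11:22:33 fw01 Denied tcp src 10.0.0.5 dst 8.8.8.8";
            Matcher m = SYSLOG.matcher(raw);
            if (m.matches()) {
                System.out.println("pri="  + m.group(1));
                System.out.println("time=" + m.group(2));
                System.out.println("host=" + m.group(3));
                System.out.println("msg="  + m.group(4));
                // a real parser would map these fields to database columns
            } else {
                System.out.println("unparsed line, kept as raw only");
            }
        }
    }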

The latest updates add another way of forwarding the logs: FTP and SCP. The appliances run a PureFTP server where each user name is linked to an "Inbox" directory. In addition, the appliances support multiple proprietary protocols for different vendors such as Checkpoint LEA, Cisco SDEE and so on.
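For the FTP path, a data source simply logs in and drops its file; the account determines which "Inbox" the file lands in. A minimal sketch of such an upload using Apache Commons Net is below. The host name, account name and file paths are invented, and a real deployment would obviously not hard-code credentials.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    // Hypothetical client pushing a log file to an LP appliance over FTP.
    // The account name is invented; on the appliance it would map to an Inbox directory.
    public class InboxUpload {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            ftp.connect("lp-appliance.example.com");
            ftp.login("fw01-inbox", "secret");
            ftp.enterLocalPassiveMode();
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            try (InputStream in = new FileInputStream("/var/log/fw01.log")) {
                ftp.storeFile("fw01.log", in); // lands in the Inbox tied to this account
            }
            ftp.logout();
            ftp.disconnect();
        }
    }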

The raw logs can be forwarded, or "upstreamed", to other Trustwave appliances or to other syslog servers. The parsed logs can be forwarded only to Trustwave products. This is the main function of the log aggregator (LA) family of products.
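Upstreaming raw logs is conceptually nothing more than re-emitting the original lines to another collector (in practice syslog-ng presumably does this itself). A toy Java illustration, with a placeholder upstream host, would look like this:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;

    // Toy forwarder: re-send one raw syslog line, unmodified, to an upstream collector.
    public class RawUpstream {
        public static void main(String[] args) throws Exception {
            String raw = "<34>Aug 17 11:22:33 fw01 Denied tcp src 10.0.0.5 dst 8.8.8.8";
            byte[] payload = raw.getBytes(StandardCharsets.UTF_8);
            InetAddress upstream = InetAddress.getByName("upstream-la.example.com");
            DatagramSocket socket = new DatagramSocket();
            socket.send(new DatagramPacket(payload, payload.length, upstream, 514));
            socket.close();
        }
    }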

All notification filters are checked every 30 minutes, with the exception of the silent-device alert, which runs every 10 minutes and fires for data sources that have stopped sending logs for some reason. Notifications can be sent via SMTP or as SNMP traps.
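The silent-device check boils down to comparing each device's last-seen timestamp against a threshold. A rough sketch of that logic follows; in reality the timestamps would come from the MySQL back end rather than an in-memory map, and the device names are made up.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a silent-device check: flag any data source whose last log
    // arrived longer ago than the allowed silence window (10 minutes here).
    public class SilentDeviceCheck {

        static final Duration SILENCE_WINDOW = Duration.ofMinutes(10);

        public static void main(String[] args) {
            Map<String, Instant> lastSeen = new HashMap<>();
            lastSeen.put("fw01", Instant.now().minus(Duration.ofMinutes(2)));
            lastSeen.put("proxy02", Instant.now().minus(Duration.ofMinutes(45)));

            Instant now = Instant.now();
            for (Map.Entry<String, Instant> e : lastSeen.entrySet()) {
                if (Duration.between(e.getValue(), now).compareTo(SILENCE_WINDOW) > 0) {
                    // an "RG"-style process would raise an SMTP or SNMP notification here
                    System.out.println("Silent device alert: " + e.getKey());
                }
            }
        }
    }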

Report schedules are quite flexible: they can be configured to run daily, weekly or on a specific date, covering logs from a specific period. The reports can be emailed or saved locally, and they can be exported in PDF, CSV, XLS and other formats.

A very useful, yet overpriced, feature is high availability, where two appliances form a shared-nothing cluster. It is built on top of the Pacemaker resource manager available on Linux, and the storage is replicated by means of DRBD (Distributed Replicated Block Device). The HA setup replicates the database containing the logs as well as the raw and parsed log files. A virtual IP address is one of the cluster resources; it is owned by the active node so that it processes the traffic sent to the cluster address.

