Servers are still the main target of attacks detected by network-based IPS, by almost a 2-to-1 ratio. Considering the weaknesses of client systems discussed in part I of this post, we should ask: why attack servers, knowing that this kind of equipment is most likely patched and kept under surveillance? For the same reason thieves rob banks: because that is where the money is. Application servers are connected to the network, and sometimes to the internet through a Demilitarized Zone (DMZ), and automated attacks suit servers well because they exploit vulnerabilities in services or applications without any end-user interaction. An attacker can scan a server's ports and services from outside the network, or from a compromised internal client, and then launch techniques specific to each application version or operating system it runs. Many of these remote attacks, if successful, give the attacker remote control of the system.
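To make the scanning step concrete, here is a minimal sketch of a TCP connect scan of the kind an attacker (or a defender auditing exposure) might run against a reachable host. The host and port list are illustrative assumptions; run something like this only against systems you are authorized to test.

```python
# Minimal TCP connect-scan sketch (hypothetical target and port list).
# Only run this against hosts you are authorized to test.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Probe a few common service ports on the local machine
    print(scan_ports("127.0.0.1", [22, 80, 443, 3306]))
```

Once the open ports reveal which services (and often which versions) are running, the attacker can pick an exploit matching that exact software, which is why unpatched services in a DMZ are so valuable a target.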


The main attack vectors are mostly oriented toward remote code execution (RCE); the three leading categories by incidence are code execution, memory errors, and buffer overflows. Even denial-of-service (DoS) attacks can help the attacker, working as a smokescreen that distracts defenders from the real attack while it is happening. By the time the smoke clears, the attack is complete and the target server has been compromised.


Clients also represent clear targets, especially for network-based attacks focused on spreading across an internal network or an unprotected public network. Besides missing the patches and service packs that fix known vulnerabilities, clients are often exposed because important protection features are disabled. For example, almost a quarter of the endpoints in a typical enterprise don't have their desktop security software enabled, and more than 50% have Bluetooth active, leaving them exposed to wireless attacks in public environments.



Organizations are not only struggling against bot infestations; the bots themselves are more active than ever. Communications between bots and C&C (Command and Control) servers keep increasing over time; statistics indicate that a bot attempts to contact its C&C server every 3 minutes. Each of these communication attempts is a chance for the bot to receive instructions and potentially exfiltrate private data from the affected organization. This increase in C&C traffic represents a serious threat to organizations struggling to protect the integrity of their systems and data. According to security vendors such as Check Point, around 77% of bots remain active for more than 4 weeks.
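That every-3-minutes cadence is itself a detection opportunity: a bot calling home produces outbound connections at near-constant intervals, unlike human browsing. Here is a sketch of flagging that periodicity from a list of connection timestamps; the input format and the jitter threshold are assumptions for illustration, not any specific product's log schema.

```python
# Sketch: flag hosts whose outbound connections to one destination recur at
# near-constant intervals -- the beaconing pattern described above.
# The timestamp-list input format and jitter threshold are assumptions.
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """timestamps: sorted connection times (seconds) from one host to one dest.
    Returns True when intervals are nearly constant (low relative std dev)."""
    if len(timestamps) < 4:
        return False  # too few samples to judge periodicity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    return avg > 0 and pstdev(intervals) / avg < max_jitter

# A bot calling home roughly every 180 seconds, with slight jitter:
beacon = [0, 181, 360, 542, 720, 901]
# A user browsing at irregular times:
human = [0, 40, 600, 630, 2000, 2100]
print(looks_like_beacon(beacon), looks_like_beacon(human))
```

Real anti-bot gateways combine this kind of traffic analysis with reputation feeds and payload inspection, but the periodic-beacon signal is one of the simplest giveaways.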



An effective way for organizations to cope with this accelerating malware activity, and to fight the growing rate of attacks, infections, and data exfiltration in their environments, is to automate and coordinate multiple defense layers. Essential measures include:

  • Gateway and host antivirus with URL filtering: Organizations must be able to detect and block malware, as well as any attempt to connect to a known malware site.
  • Anti-bot gateways: Besides detecting malware, this kind of solution should have the intelligence to mitigate attacks based on Domain Generation Algorithms (DGAs).
  • Enhanced IPS: The solution should cover network systems, servers, and infrastructure.
  • Comprehensive maintenance of systems and applications: Patches, fixes, and service packs.
  • Best practices for client and server configuration: Restrict admin privileges, disable Java and unnecessary scripting, and limit the kinds of applications that end users can install on endpoints.
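To see why DGA-based bots defeat static blocklists, and why anti-bot gateways need DGA intelligence, here is a toy date-seeded generator (a hypothetical algorithm, not any real malware family): the bot and its operator derive the same fresh domain list every day, so a blocklist of yesterday's domains never covers today's rendezvous point.

```python
# Toy date-seeded DGA (hypothetical, not any real malware family).
# Bot and operator compute the same daily list; the operator registers
# just one of the domains, and the bot tries them until one resolves.
import hashlib
from datetime import date

def daily_domains(seed, day, count=5, tld=".com"):
    """Generate `count` deterministic pseudo-random domains for `day`."""
    domains = []
    for i in range(count):
        material = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + tld)  # 12 hex chars as the label
    return domains

print(daily_domains("examplebot", date(2013, 7, 1)))
```

Gateways with DGA intelligence counter this by recognizing the statistical look of machine-generated names and the burst of NXDOMAIN lookups a bot produces while hunting for the live domain, rather than relying on a fixed list.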

I hope this has been informative, and I'd like to thank you for reading… see you in the next post!