Servers are still the main target of attacks detected by network-based IPS, by almost a 2-to-1 ratio. Considering the weaknesses of client systems mentioned in part I of this post, we should ask: why attack servers, knowing that this kind of equipment is most likely patched and kept under surveillance? For the same reason thieves rob banks… because that's where the money is. Application servers are connected to the network, and sometimes to the internet through a Demilitarized Zone (DMZ), and automated attacks suit servers well because they exploit vulnerabilities in services or applications without any end-user interaction. It is possible to scan a server's ports and services from outside the network, or from a compromised internal client, and then attack it with techniques specific to each application version or operating system it runs. So there are many remote attacks that, if successful, will give the attacker remote control of the system.


The main attack vectors are oriented toward remote code execution (RCE); the three leading ones by incidence are code execution, memory errors and buffer overflows. Even denial-of-service (DoS) attacks can help attack a server, working as a smokescreen to distract defenders while the real attack is happening. By the time the smoke clears, the attack is complete and the target server has been compromised.


Clients also represent clear targets, especially for network-based attacks focused on spreading across an internal network or an unprotected public network. Besides missing the patches and service packs that fix known vulnerabilities, clients are often vulnerable because important protection features are disabled. For example, almost a quarter of the endpoints in an enterprise don't have their desktop security system enabled, and more than 50% have Bluetooth active, leaving them exposed to wireless attacks in public environments.



Organizations are not only struggling against bot infestations; the bots themselves are becoming ever more active. Communications between bots and C&C (command and control) servers increase as time goes by; statistics indicate that a bot tries to communicate with its C&C server every 3 minutes. Each of these communication attempts is a chance for the bot to receive instructions and potentially exfiltrate private data from the affected organization. This increase in C&C communication represents a serious threat to organizations struggling to protect the integrity of their systems and data. According to security vendors like Check Point, around 77% of bots remain active for more than 4 weeks.



An effective way for organizations to handle this accelerating malware activity, and to fight the growing rate of attacks, infections and data exfiltration in their environments, is to automate and coordinate multiple defense layers. Essential measures include:

  • Gateway and host antivirus with URL filtering: Organizations must be able to detect and block malware, as well as any attempt to connect to a known malware site.
  • Anti-bot gateways: Besides detecting malware, this kind of solution should have the intelligence to mitigate attacks based on Domain Generation Algorithms (DGA).
  • Enhanced IPS: The solution should cover network systems, servers and infrastructure.
  • Comprehensive maintenance of systems and applications: patches, fixes and service packs.
  • Best practices for client and server configuration: restrict admin privileges, disable Java and other scripting engines where they aren't needed, and limit the kinds of applications end users can install on endpoints.
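To give an idea of what the anti-bot gateway in the list above is up against, here is a minimal sketch of a Domain Generation Algorithm. The seed, hash choice and ".example" suffix are hypothetical, not taken from any real malware family; the point is that the bot and the gateway can run the same algorithm, which is what lets the gateway pre-compute the day's rendezvous domains and block them:

```python
import hashlib
from datetime import date

def dga_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Derive a daily list of pseudo-random domains from a shared seed.

    A bot uses this to find its C&C server without hard-coding a domain;
    a gateway that knows the algorithm can pre-compute the same list.
    """
    domains = []
    for i in range(count):
        data = f"{seed}-{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(data).hexdigest()
        domains.append(digest[:12] + ".example")  # hypothetical TLD
    return domains

# Bot side and gateway side generate identical lists for the same day:
today = date(2014, 1, 15)
bot_targets = dga_domains("hypothetical-seed", today)
blocklist = set(dga_domains("hypothetical-seed", today))
assert all(d in blocklist for d in bot_targets)
```

Because the domain list changes every day, a static blocklist is useless against a DGA; the gateway needs either the algorithm itself or behavioral detection of the failed lookups the bot generates while hunting for a live domain.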

I hope this has been informative for you, and I'd like to thank you for reading… see ya' on the next post



The increase of malware downloads in the enterprise environment is kind of scary. During 2012, almost half (43%) of organizations had at least one case of a user downloading malware at an average rate of at least once per day, and the remaining 57% experienced one malware download every 2 to 24 hours. In contrast, over 2013 about 58% of organizations reported an incident of a user downloading malware every two hours or less.

It's necessary to have a clear view of the weaknesses of systems and applications; that's why defending against vulnerabilities involves two main points:

  • Apply all patches available from vendors in order to correct the underlying problems
  • Implement Intrusion Prevention Systems (IPS) to detect, and optionally block, all attempts to exploit known vulnerabilities

Despite the rise of vendor bounty programs for reported vulnerabilities, the high market value of true zero-day vulnerabilities is driving researchers to sell the information to "gray hat" buyers: government agencies (which work with hackers to expand their cyber defense capabilities) and professional penetration testing organizations.

An even more lucrative clandestine malware market serves "black hat" hackers… here, pricing for previously unreported vulnerabilities varies with the target platform, starting at $5,000 USD for Adobe Reader and reaching $250,000 USD for Apple iOS. The availability of zero-day vulnerabilities for purchase allows any organization to launch advanced cyber attacks regardless of its technical skills.


According to the CVE (Common Vulnerabilities and Exposures) database, during 2013 Oracle remained at the top of the list of reported vulnerabilities, many of them found in Java products widely used in server and client applications; that is why Java has been such a great opportunity for attackers.



As for attacked platforms, Windows is still the winner, affecting about 67% of organizations. The increase in attacks against Adobe (Reader / Flash Player) and VideoLAN (VLC media player) gives us an idea of the attacks aimed at end users, while the growing attention to infrastructure devices and platforms is evident in the higher incidence of attacks on Squid (internet proxy and caching), 3Com (switching and routing) and CA (identity and analytics) systems.


  • 84% of organizations have downloaded a malicious file
  • Every 60 seconds a host accesses a malicious site
  • Every 10 minutes a host downloads malware
  • 33% of hosts aren't running updated software versions

I hope it’s been informative for you and I’d like to thank you for reading… see you on part two.

Cloud a la Carte


I like to joke that every trouble starts somewhere at the executive level… for example, a golf course!!! Suddenly your boss comes to the office and this happens…

“You know what?… We are about to launch some new cloud services…” he says.

“Really?… What cloud services?”… you ask.

“Well, I don’t really know… think about something… just get it done”… he replies.


This is the time when you have to figure out what to do and how to do it…

There are a lot of different cloud services out there… so I'll try to illustrate what the different things people are offering really are. If we assume that more or less everything is a web-based application today, then we have the user interface, which lives in the user's browser, or in an iPad or smartphone app, or something like that. On the server side, there is a web application that runs on top of a scripting environment, which runs on top of a web server that, by the way, uses a database. Both the web server and the database use a file system that sits on some sort of block storage; and both the web server and the database run on top of an operating system that needs CPU, RAM, storage and network access.


You can insert a service boundary anywhere between these layers and you'll get a different cloud service:

  • If you put a service between the user and the web application, it is called Software as a Service (SaaS)
  • If you give people an environment in which they can develop their own scalable web applications, it's usually called Platform as a Service (PaaS), which can also be narrowed down to Database as a Service (DBaaS)
  • We can also have two types of Storage as a Service: you can offer either file systems as a service or block storage as a service
  • Finally, the service that many cloud providers are most interested in nowadays… Infrastructure as a Service (IaaS), which is usually server virtualization plus a lot of things around it
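As a simplified sketch of the stack described above (the layer names and cut points are my own approximation, and real offerings blur these lines), each service model is just a choice of where the provider's responsibility ends and the customer's begins:

```python
# Layers from the user downward, following the stack described above.
STACK = [
    "user interface", "web application", "scripting environment",
    "web server", "database", "file system", "block storage",
    "operating system", "physical resources (CPU/RAM/storage/network)",
]

# For each model, the provider manages this layer and everything below it;
# the customer handles whatever sits above. (A deliberate simplification.)
PROVIDER_TOP_LAYER = {
    "SaaS": "web application",       # you only touch the UI
    "PaaS": "scripting environment", # you deploy your own web app
    "IaaS": "physical resources (CPU/RAM/storage/network)",  # you get
                                     # virtualized hardware, OS and up is yours
}

def customer_layers(model: str) -> list[str]:
    """Return the layers the customer remains responsible for."""
    cut = STACK.index(PROVIDER_TOP_LAYER[model])
    return STACK[:cut]

print(customer_layers("PaaS"))  # ['user interface', 'web application']
```

Reading the table this way also makes the DBaaS and Storage-aaS variants obvious: they are just cuts placed at the database or at the file-system/block-storage layers instead.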


What is the difference between traditional Web Hosting and Platform or Infrastructure as a Service?

Traditional web hosting was pretty static: you could only get a web server or a virtual web server, and that was it; if demand for your services grew, the web server just fell over, and that was the beginning of many scalability issues.

A true, complete IaaS solution should be scalable, elastic and location-independent, and the cloud service should be genuinely on-demand, which means… I can consume it when I need it; no need to sign a contract, pay six months in advance, and then get the service three months later.


Key Ingredients

  • Scalability – Scale Out preferred over Scale Up
  • Orchestration – There must be something behind the scenes that auto-configures resources like computing, storage and networking, so that users can ask for a service and get it when they need it, not two days later.
  • Customer-driven deployment



  • Robust networking – Use of standard protocols that allow us to interact with any other networking element without compatibility issues
  • Lots of east-west traffic – Traffic within the data center (replication, distributed file systems, multi-tenant architectures)
  • Robust local & global load balancing – Choose load balancers that offer high-level APIs (the more programmable, the better)

It is important to understand how an IaaS solution is built, because most cloud offerings seem to be implemented on top of an IaaS service by means of server virtualization.






SDN puzzle

I remember a parable about six blind men trying to describe an elephant based only on their sense of touch. Each of these men had the chance to express what that "thing" could really be like… one feels the tail and says the elephant is a rope; one feels the trunk and thinks it's a wall. Another feels a leg and thinks it is a tree. Eventually, the men realize they have to discuss what they have learned to get the bigger picture of what an elephant truly is. As you have likely guessed, a lot of this applies to SDN as well.


Each IT professional forms his own ideas about every technology that passes through his hands or mind, according to the role that engineer plays in the organization. For example… given my background, which is mostly in networking, I'm most interested in the orchestration, automation and controller-based features of protocols like OpenFlow, to reduce daily operational tasks; but the picture could look completely different to a DevOps engineer, for whom the focus would be programmability.

However, if you concentrate only on the parts of SDN that are most familiar to you, you'll probably miss the bigger picture. Just as an elephant is more than a trunk or a tail, SDN is more than its individual pieces. Programmable interfaces and orchestration hooks are means to an end. The goal is to take all the pieces and make them into something greater.

The term "software-defined networking" is not as new as many of us might think, but only in the last few months has it literally become a "boom".

Before we can get a better idea of the SDN concept, we must first examine how typical networks are built. Since most networks are based on routing and switching equipment, we can classify each step of data handling into one of three planes:

  • Forwarding Plane – Moves packets from input to output
  • Control Plane – Determines how packets should be forwarded
  • Management Plane – Methods of configuring the control plane (CLI, SNMP, etc.)
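As a rough sketch of how these three planes coexist inside a single traditional device, here's a toy router (the class and method names are made up purely for illustration) where configuration, route computation and packet forwarding are distinct steps:

```python
# Toy router illustrating the three planes inside one box (hypothetical API).
class Router:
    def __init__(self):
        self.static_routes = {}    # populated via the management plane
        self.forwarding_table = {} # consulted by the forwarding plane

    # Management plane: the operator configures the device
    # (a stand-in for CLI or SNMP).
    def configure_static_route(self, prefix: str, interface: str) -> None:
        self.static_routes[prefix] = interface
        self.recompute()

    # Control plane: decide how packets *should* be forwarded.
    def recompute(self) -> None:
        # A real router would also merge routes learned from protocols
        # like OSPF or BGP; here we only have the static ones.
        self.forwarding_table = dict(self.static_routes)

    # Forwarding plane: move a packet from input to output.
    def forward(self, dst_prefix: str) -> str:
        return self.forwarding_table.get(dst_prefix, "drop")

r = Router()
r.configure_static_route("10.0.0.0/8", "eth0")
print(r.forward("10.0.0.0/8"))       # → 'eth0'
print(r.forward("192.168.0.0/16"))   # → 'drop'
```

In a classic network, every device bundles all three planes together, exactly like this toy router; SDN's central idea, as we'll see next, is to pull the control plane out of the box.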

For example, if you are planning a weekend trip by car, there are some tasks you must complete to reach your final destination. You must get into your car and sit in the driver's seat (management plane), configure the GPS to trace the route you will follow (control plane), and finally start the engine and begin driving (forwarding plane). All these operations take place within the same "device" (the car), but while you are driving, every other car on the street operates independently, based on its own "local configuration" (driver skills, car type and features). Having said that, it's important to recognize that although you could have a premium, top-of-the-line car with the best engine, wheels, suspension, chassis, sound system and bodywork, the end result of your trip depends on a whole mix of factors: weather, traffic density, highway usage, accidents, other drivers' skills, etc., which can be seen as "independent configurations" within the whole environment.

If I think of the previous story as a controller-based environment, I can imagine a dedicated person sitting in a cabin, with a laptop connected to a real-time satellite feed, simultaneously broadcasting by radio the weather status, traffic density and locations of car crashes, so that every driver on every highway could choose the least crowded roads and the routes with the safest weather, and arrive at their destination as quickly, safely and happily as possible.


In the simplest possible terms, SDN entails decoupling the control plane from the forwarding plane and offloading control functions to a centralized controller. Rather than each node in the network making its own forwarding decisions, a centralized software-based controller (likely running on commodity server hardware) is responsible for instructing subordinate hardware nodes how to forward traffic. Because the controller effectively maintains the forwarding tables of all nodes across the network, SDN-enabled nodes don't need to run control protocols among themselves; they rely on the controller to make all forwarding decisions for them. The network, as such, is said to be defined by the software running on the controller.
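To make that concrete, here is a minimal sketch of the controller side of the idea. The switch names and topology are invented, and this is nothing vendor-specific: a central function computes every switch's forwarding table with a breadth-first search, and would then push each table down to its switch; the switches themselves make no decisions:

```python
from collections import deque

# Toy topology: adjacency list of switches (hypothetical names).
TOPOLOGY = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s4"],
    "s3": ["s1", "s4"],
    "s4": ["s2", "s3"],
}

def compute_forwarding_tables(topology):
    """Centralized control plane: for every (switch, destination) pair,
    compute the next hop along a shortest path using BFS."""
    tables = {sw: {} for sw in topology}
    for dst in topology:
        # BFS outward from the destination; the switch each node is
        # discovered from is its next hop toward dst.
        next_hop = {dst: None}
        queue = deque([dst])
        while queue:
            node = queue.popleft()
            for neighbor in topology[node]:
                if neighbor not in next_hop:
                    next_hop[neighbor] = node
                    queue.append(neighbor)
        for sw, hop in next_hop.items():
            if hop is not None:
                tables[sw][dst] = hop
    return tables

# The "controller" would push each table to its switch; switches only forward.
tables = compute_forwarding_tables(TOPOLOGY)
print(tables["s1"]["s4"])  # → 's2'
```

The key point is where the computation lives: in a traditional network, every switch would run its own routing protocol to build that table; here, one program with a global view builds all the tables, which is exactly the decoupling the paragraph above describes.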