Microsoft patch snuffs out major worm potential

Microsoft today released four patches as part of its regularly scheduled patch cycle, including a critical fix to a flaw that could allow attackers to launch a dangerous worm.

This month’s patches affect all versions, including Windows 7 and Windows Server 2008 R2, with two patches rated important and one rated moderate. All of the patches require a restart.

The update labeled MS11-083 fixes a problem with the TCP/IP stack in Windows, or what Microsoft describes as “an externally found reference counter issue in TCP/IP stack.” The good news is that exploiting this vulnerability isn’t easy.
“Since this vulnerability does not require any user interaction or authentication, all Windows machines, workstations and servers that are on the Internet can be freely attacked. The mitigating element here is that the attack is complicated to execute,” says Amol Sarwate, manager of vulnerability labs for patch management vendor Qualys. “But otherwise this has all the required markings for a big worm.”

Essentially, the attack involves sending a large number of UDP packets to an unprotected port. When the system is deluged with network packets, the reference counter in the stack will keep incrementing and eventually wrap around. At that point, the system could crash, or if the attacker has planted other malware, the hacker could own the system.
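The wrap-around Sarwate describes is ordinary fixed-width integer overflow. A toy sketch of the idea (this simulates a 32-bit counter in Python; it is an illustration of wrap-around, not the actual Windows TCP/IP code):

```python
# Toy illustration: a reference counter stored in a fixed-width 32-bit
# field wraps back to zero once it is incremented past its maximum value.

MASK_32 = 0xFFFFFFFF  # a 32-bit unsigned counter holds 0 .. 4294967295

def increment(refcount: int) -> int:
    """Increment a 32-bit reference counter, wrapping on overflow."""
    return (refcount + 1) & MASK_32

count = MASK_32 - 1       # counter driven near its limit by a packet flood
count = increment(count)  # now 0xFFFFFFFF
count = increment(count)  # wraps around to 0
print(count)              # -> 0
```

Once the counter wraps, the stack's bookkeeping no longer matches reality, which is what opens the door to a crash or worse.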

Notes Joshua Talbot, security intelligence manager, Symantec Security Response: “We estimate an attack attempting to leverage it would take a considerable amount of time; perhaps four to five hours to complete a single attack. However, if an attacker can pull it off the result would be a complete system crash or compromise if the attacker develops a reliable means of exploitation.”

Among the important patches is one that fixes a DLL preloading vulnerability in Windows Mail (MS11-085). This class of attack has been around since August 2010, Sarwate says.

“The vulnerability could allow remote code execution if a user opens a legitimate file (such as an .eml or .wcinv file) that is located in the same network directory as a specially crafted dynamic link library (DLL) file. Then, while opening the legitimate file, Windows Mail or Windows Meeting Space could attempt to load the DLL file and execute any code it contained,” Microsoft says.
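Microsoft's description boils down to library search order: the loader resolves a bare DLL name by walking a list of directories and taking the first hit. A minimal sketch of that mechanism (a Python simulation with a hypothetical helper.dll and temporary directories, not the real Windows loader):

```python
from pathlib import Path
import tempfile

# Toy model of why DLL preloading works: the loader tries directories in
# order and takes the first match, so a planted library sitting next to
# the opened document shadows the legitimate copy in a system path.

def resolve_library(name, search_path):
    """Return the first file called `name` found along `search_path`."""
    for directory in search_path:
        candidate = Path(directory) / name
        if candidate.exists():
            return candidate
    return None

with tempfile.TemporaryDirectory() as share, \
     tempfile.TemporaryDirectory() as system_dir:
    (Path(share) / "helper.dll").touch()       # attacker's planted DLL
    (Path(system_dir) / "helper.dll").touch()  # legitimate system copy
    # Searching the document's directory before the system directory
    # (as vulnerable applications did) picks up the planted copy.
    found = resolve_library("helper.dll", [share, system_dir])
    print(found.parent == Path(share))  # -> True
```

The fix for this class of bug is simply to stop searching attacker-influenced directories before trusted ones.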

Microsoft has also fixed another vulnerability in Active Directory, Active Directory Application Mode (ADAM) and Active Directory Lightweight Directory Service (AD LDS) via MS11-086. It could allow elevation of privileges “if Active Directory is configured to use LDAP over SSL (LDAPS) and an attacker acquires a revoked certificate that is associated with a valid domain account and then uses that revoked certificate to authenticate to the Active Directory domain,” Microsoft says. However, Active Directory is not configured to use LDAP over SSL by default.

Although these two patches are only rated as important, Microsoft says that it is likely that exploit code is available in the wild, or will be soon.
The final patch, MS11-084, rated moderate, fixes a hole in Windows Kernel Mode Drivers. If executed, it could lead to a denial of service “if a user opens a specially crafted TrueType font file as an email attachment or navigates to a network share or WebDAV location” with the evil TrueType font file, Microsoft says.

A patch for the zero-day vulnerability used by the Duqu installer did not arrive, nor was it expected. Last week, Microsoft released a manual fix that IT administrators can execute themselves. Symantec’s Talbot believes that Microsoft may not wait until a routine Patch Tuesday and will release an out-of-band fix for Duqu when it is ready.

Death to Firewalls / Long Live Firewalls

Firewalls have been changing slowly over the years as the network
architectures around them have evolved. They are becoming more
decentralized and increasingly virtualized. As firewalls move from being
located solely at the perimeter inward toward the servers, many other
changes are taking place. The pendulum of centralized versus distributed
systems continues to swing back and forth as the industry searches for the
optimal equilibrium for security.

One of the first books I read on the subject of firewalls was “Building
Internet Firewalls” by Elizabeth D. Zwicky, Simon Cooper and D. Brent
Chapman. The book covered the topics of least privilege, defense in depth,
choke points, weakest links, fail-safe stances, universal participation
and diversity of defense. The concept of the choke point helped
organizations focus their attention on defining a security perimeter and
placing the firewalls at that single point of entry. At the time, most
organizations had a single perimeter, and many could only afford a single
firewall at their Internet connection.

It is clear that firewalls have changed over the years. Many firewalls
lack policy granularity, and many organizations’ firewalls end up with
long lists of NAT and policy rules. Some say that firewalls do not impede
the bad traffic, they just impede the good traffic; most attacks take
place at the application layer over TCP port 80 anyway. Stateful firewalls
see only one aspect of the security picture by looking at the packet
header. We need firewalls to perform more content filtering and deep
packet inspection. Unified Threat Management (UTM) firewalls evolved as we
expected more functionality at the single choke point. We now rely more on
DPI/IPS, behavioral analysis, anomaly detection, Data Loss Prevention
(DLP) and Web Application Firewalls (WAFs) to protect our critical
systems. A firewall can define a network perimeter, but it can’t protect
against the insider/malware threat. Since 1997 I have thought the end of
the firewall era was right around the corner. In recent years we have seen
the “erosion of the security perimeter,” and our firewalls have turned
into Swiss cheese. Because of all these trends, the firewall as a concept
has slowly died, or at least had its role in the security architecture
diminished.

The other day I was joking with someone who was complaining about their
slow computer, and I flippantly suggested turning off their antivirus
software. AV software can put a strain on computer resources, and running
without it certainly speeds a computer up. However, you wouldn’t think of
running a critical computer without AV software. Likewise, running a
network without a firewall can make the transmission of data very fast,
yet none of us would ever consider running an Internet-connected network
without one.

Years ago, firewalls were confined to the Internet perimeter to create
that choke point. Now organizations use firewalls at multiple perimeters
and internally. As businesses started to do more with firewalls and
segment their environments into separate “enclaves,” “zones” or
“compartments,” they moved the firewalls to the core. There are challenges
with using firewalls on the interior of your network. The rule-sets either
get large or get less granular to remain manageable. Policies in these
firewalls tend to have subnets as their minimum level of granularity for
the source or destination address. In the end, these firewalls only delay
legitimate internal traffic and do not necessarily keep out the bad guys.
If you assume that the bad guys are already inside your network, you are
probably on the right track.
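Subnet-level granularity means every host in a subnet is treated alike, so a compromised internal machine inherits whatever its subnet is allowed to do. That can be sketched as a first-match rule list (the subnets and rules here are invented for illustration):

```python
import ipaddress

# Sketch of subnet-granularity policy: an interior firewall rule often
# matches whole subnets, so every host in 10.1.0.0/16 is treated alike.

rules = [
    # (source subnet, destination subnet, action)
    (ipaddress.ip_network("10.1.0.0/16"),
     ipaddress.ip_network("10.2.0.0/16"), "allow"),
    (ipaddress.ip_network("0.0.0.0/0"),
     ipaddress.ip_network("0.0.0.0/0"), "deny"),
]

def evaluate(src, dst):
    """First-match evaluation, as most firewall rule-sets behave."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    for src_net, dst_net, action in rules:
        if src in src_net and dst in dst_net:
            return action
    return "deny"

# A compromised host inside 10.1.0.0/16 looks just like its neighbors:
print(evaluate("10.1.5.5", "10.2.9.9"))     # -> allow
print(evaluate("192.168.1.1", "10.2.9.9"))  # -> deny
```

Nothing in a rule-set like this distinguishes a legitimate host from a compromised one in the same subnet, which is the point of the paragraph above.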

There are several reasons why firewalls are suboptimal. As the policy size
increases, so do the demands on the CPU and memory resources of the
firewall. We also expect more logging from our firewalls because we want
to send that data to our Security Information Management Systems (SIMS).
This logging further drives down firewall performance. In the early years
of firewalls we had a hard time implementing firewalls that provided
redundancy and sufficient performance. As edge bandwidth increased,
firewalls needed increasingly higher interface speeds. Now we have
firewalls with 10 Gigabit Ethernet interfaces. This makes for an expensive
firewall that has the bandwidth and CPU resources to keep up with that
amount of traffic. We are basically turning our firewalls into slow
routers.

There is a distinct trend in the industry to move stateful firewalling
closer to the servers within an IT environment. With server virtualization
and server consolidation, we can have virtual servers with different trust
levels on the same physical server. A perimeter or core firewall is no
longer close enough to the server. Having a firewall close to the server
provides maximum security for each server and allows servers to
communicate with servers of different trust levels only through a stateful
firewall. This technique of firewalling at the hypervisor/server-
virtualization layer prevents unacceptable server-to-server
communications.

The following diagram shows how there can be many virtual computers running
within one physical computer. Each may have a different level of trust or
classification of data it handles. Therefore, having stateful packet filtering
within the virtual environment is required to maintain separation and security.
Using stateful packet filtering at this level of the architecture may also
be necessary to meet security compliance standards.
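The stateful separation idea can be sketched as a tiny connection-tracking filter: policy decides who may open a flow, and state admits only the return traffic of flows already opened (the VM names and policy are assumptions, and real hypervisor firewalls are far richer than this):

```python
# Minimal sketch of stateful filtering between virtual machines:
# outbound flows permitted by policy create connection state, and only
# packets matching existing state (or the policy) are admitted.

class StatefulFilter:
    def __init__(self, allowed_initiations):
        self.allowed = allowed_initiations  # set of (src_vm, dst_vm) pairs
        self.established = set()            # connection table

    def packet(self, src, dst):
        if (src, dst) in self.allowed:      # policy permits a new flow
            self.established.add((src, dst))
            return "pass"
        if (dst, src) in self.established:  # reply to a known flow
            return "pass"
        return "drop"                       # everything else, including
                                            # lateral server-to-server moves

fw = StatefulFilter(allowed_initiations={("web-vm", "db-vm")})
print(fw.packet("web-vm", "db-vm"))   # -> pass (policy allows)
print(fw.packet("db-vm", "web-vm"))   # -> pass (reply to established flow)
print(fw.packet("db-vm", "mail-vm"))  # -> drop (no policy, no state)
```

The last line is the property that matters here: a VM that was never authorized to initiate a flow cannot reach its neighbor on the same physical host.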

Virtual Firewall

The current trend is moving toward host-based firewalls. Although there
may be organizations out there using iptables/ip6tables on their
virtualized firewalls, many others are looking to use a more sophisticated
firewall at the hypervisor layer. A variety of companies now offer
virtualized firewall products in this new market:

Cisco Nexus 1000V Virtual Security Gateway (Virtual ASA)
Juniper Networks vGW Virtual Gateway (formerly Altor Networks)
Check Point Security Gateway Virtual Edition
5nine Virtual Firewall for Hyper-V
Reflex Systems vTrust Security

VMware has a VMsafe Partner Program to “approve” those vendors who have
solutions that work with VMware. Earlier this year, Ellen Messmer wrote a
good article on security in virtualized environments titled “Battle looms
over securing virtualized environments.”

The other issue that comes to light early in an organization’s
consideration of a virtual firewall system is management. Who maintains
the virtualized firewall? Does the responsibility for configuration and
management of the virtual firewall fall on the network team, the security
team or the system administrators? This is becoming a larger issue as more
appliances move to the virtualization layer. It is easy to predict that
Server Load Balancer (SLB) and Application Delivery Controller (ADC)
systems will become virtualized and be implemented in the hypervisor
layer. As systems become increasingly virtualized, the traditional lines
of physical demarcation are blurring.

This is the time of year for horror movies. One of my favorite actors when
I was growing up was Vincent Price, and I liked the movie “Pit and the
Pendulum.” It reminds me of how trends swing back and forth like a
pendulum. Whether it is bell-bottom jeans or how IT systems move from
centralized to distributed and back again, the pendulum is always in
motion. Many years ago there were mainframe computers with centralized
computing. Over the 1980s to 2000s we distributed our computing resources
and made them geographically diverse. This might have supported our
Disaster Recovery (DR) goals, but it made it difficult to manage such a
distributed environment. However, the pendulum has swung the other way as
companies created centralized server farms, consolidated data centers and
performed server consolidation. The pendulum has moved from mainframes to
distributed servers, and now we are moving back toward larger physical
servers with virtualized operating systems. This sounds remarkably like
timesharing on a mainframe. As far as firewall architectures are
concerned, the pendulum has swung from using centralized firewalls at the
perimeter to using distributed firewalls elsewhere, as the picture below
illustrates.

Firewall Pendulum

We are also witnessing the pendulum swing in the core routing/switching
realm. We have had a widely distributed set of routers performing
distributed packet forwarding with a distributed control plane for almost
20 years. Routers have distributed intelligence and use routing protocols
to share reachability information. Each router operates autonomously. Now
we may be moving back toward a centralized control plane with technologies
like OpenFlow. OpenFlow centralizes the control plane but leaves the
forwarding and data planes distributed across the network.
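That split can be sketched as a switch that only matches packets against a flow table installed by a central controller (the match fields and actions here are simplified far beyond the real OpenFlow protocol):

```python
# Toy model of the OpenFlow split: a central controller computes flow
# rules, while each switch only matches packets against its local table.

class Switch:
    def __init__(self):
        self.flow_table = []  # (match, action) entries from the controller

    def install(self, match, action):
        """Called by the (centralized) controller to program this switch."""
        self.flow_table.append((match, action))

    def forward(self, packet):
        """The (distributed) data plane: match locally, first rule wins."""
        for match, action in self.flow_table:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send-to-controller"  # table miss: ask the control plane

sw = Switch()
sw.install({"dst": "10.0.0.2"}, "output:port2")

print(sw.forward({"src": "10.0.0.1", "dst": "10.0.0.2"}))  # -> output:port2
print(sw.forward({"src": "10.0.0.1", "dst": "10.0.0.9"}))  # -> send-to-controller
```

The forwarding decision stays local to each switch; only the policy that fills the table is centralized, which is exactly the pendulum swing described above.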

Firewall architectures have changed over the past 20 years. Firewalls have
moved from the perimeter to the network core, toward the server edge and to the
virtualization layer. We have seen computers move from centralized to
distributed and back again with server consolidation and virtualization. These
pendulums will continue to swing over the years until the industry matures and
discovers the best equilibrium to support our businesses with the least cost. As
Heraclitus said, “nothing endures but change.”