Network Systems Design
Remote Access and Security
These are discussion questions. Do not use anything from a Grantham University student.
Please respond to both of the following questions:
Why are remote access capabilities a necessity in today's computing environment, and how can an organization leverage these capabilities for greater productivity without creating a security risk?
Why is it important to apply security methods to both physical and wireless communications, and which medium is easier to secure against an attack?
Some course material is provided below.
IS696 Week 4 – De-Complicating Network Security
Network security does not need to be complicated. The methods commonly deployed on networks around the world today are a complicated mess and are generally ineffective at keeping hackers from penetrating a network. There are simpler and more effective ways of keeping a network secure if organizations are willing to take the appropriate measures. These measures are typically ignored because they are seen as drastic and because they limit users' ability to access some data from work or to participate in certain forms of communication on the Internet. Unfortunately, too much risk is being accepted in place of realistic security measures. This week's lecture will focus on how we can de-complicate network security.
It is time for an honest discussion about the changes that need to be made. First, whitelisting needs to move beyond a concept and into a reality. Second, intrusion detection systems (IDS) should be deployed throughout a network, not just on the perimeter. Third, demilitarized zones should include every server that provides external services, and no external connections should be allowed past that point. Fourth, internal portions of the network should be segmented so users can access only the systems within their authorized segment. No doubt these four methods are drastic, but the next few moments will reveal that each brings a new level of robustness to the security of any network. Hackers continue to have the upper hand on the Internet because senior-level managers and executives refuse to make the hard decisions, based on the notion that restricting users' capabilities would destroy productivity. The reality is this: piling new solutions on top of old ones is not an effective strategy in the physical security realm, so why is it tolerated in the digital realm? The clutter and complication would be obvious in the physical world, where everyone could see the over-complication in place, but it is easily hidden on the digital side. Drastic times call for drastic measures, and it is well past time to take network security seriously and apply the appropriate level of protection to keep systems safe. Corporations can no longer hide behind the convenience excuse if they want to keep their information secure.
Today's method of blocking IP addresses that have previously done harm, and may try to do harm again in the future, is a good notion, but the list of nefarious IP addresses quickly grows out of control and becomes difficult, if not impossible, to manage. This method is known as blacklisting: all traffic is allowed by default and certain traffic is denied by exception, effectively blacklisting the IP addresses an administrator does not want connecting. The process is simple enough; whenever someone takes potentially harmful actions against the network, by scanning ports, running vulnerability scanners, or attempting to hack in, the router administrator adds a deny statement to the access control list (ACL) that blocks future attacks from that source. Over time, the list grows into a cluttered, unmanageable mess. This is an effective model for smaller networks with a low number of external connections, but it is not a good model for every network, because as network traffic increases, so does the number of attacks. As attacks increase, administrators must keep adding more and more IP addresses to the ACL, eventually creating an unmanageable list that can only be fixed by starting over from scratch. Most routers can handle only a limited number of ACL entries, and longer lists slow routers down because every incoming packet must be checked against every entry before being allowed to pass. Memory and CPU capacity are finite, so a growing ACL will eventually slow things down considerably. On a network that receives only a handful of connections, blacklisting will be perfectly fine, but for larger networks the deny-by-exception method should be turned on its head and replaced with the more robust deny-by-default, allow-by-exception approach, otherwise known as whitelisting.
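The allow-by-default logic described above can be sketched in a few lines of Python. This is a minimal illustration, not a real router ACL; the function and variable names are invented for the example, and the addresses come from documentation-reserved ranges:

```python
# Blacklisting: allow everything by default, deny listed exceptions.
# Addresses are illustrative (RFC 5737 documentation ranges).
BLACKLIST = {"203.0.113.7", "198.51.100.23"}  # previously hostile sources

def blacklist_allows(source_ip: str) -> bool:
    """Allow by default; deny only the listed exceptions."""
    return source_ip not in BLACKLIST
```

Note that a real router compares each packet against the deny entries in order, so as the list grows, the per-packet cost grows with it, which is exactly the scaling problem the lecture describes.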
Whitelisting is not a new concept, and it is used sparingly around the world, but if it were used more often it could prove to be an effective weapon against hackers.
Every organization needs the ability to connect to someone or something on the Internet; that is the reality of conducting business in the 21st century. But no one needs to connect to everyone, and that is a greater reality when it comes to network security. Grasping that reality and implementing whitelisting could eliminate most hacking situations, simply because most connections will never be allowed to pass through the router. Whitelisting is similar to blacklisting in that it is implemented by creating ACL entries, except that whitelisting adds entries for authorized traffic and everything else is automatically denied. If a hacker can't connect to the network, then they can't break into the network. That is the same idea behind blacklisting, but with this method no one has to worry about where the bad guys are located; only communication to authorized locations is allowed, and everyone else is automatically blocked. Whitelisting also has the added benefit of decreasing costs, because invariably less bandwidth will be used on a regular basis. Employees won't be allowed to surf to any random site during the day if it isn't whitelisted, so they will be pushed to be more productive in the office. Certainly a balance needs to be struck between effectiveness and morale, and that balance can be maintained by allowing access to common sites that are known to be safe, but the entire Internet does not need to be available as a playground. Organizations that want to implement whitelisting can monitor network traffic for a few weeks to determine where most connections are made and then use that list as a foundational starting point. Adjustments can be made on a regular basis until the balance is achieved, after which future additions would need to be formally requested. Of course, some systems, such as web servers, will need to be excluded from whitelisting and should be placed in a separate network segment.
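Inverting the earlier logic gives the deny-by-default model. Again, this is only a sketch with invented names and documentation-range addresses, not a production filter:

```python
# Whitelisting: deny everything by default, allow listed exceptions.
# Addresses are illustrative (RFC 5737 documentation ranges).
WHITELIST = {"192.0.2.50", "198.51.100.80"}  # authorized partner addresses

def whitelist_allows(source_ip: str) -> bool:
    """Deny by default; allow only the listed exceptions."""
    return source_ip in WHITELIST
```

The difference from blacklisting is a single inversion of the membership test, but the security consequence is large: an attacker's address does not need to be known in advance to be blocked.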
Demilitarized zones, or DMZs, are a segmented portion of the network where public-facing servers are located to keep the general public from accessing internal network devices. The concept is simple enough: place all of the externally accessible servers off a separate port on the firewall and don't allow any traffic to flow between that port and the ports that connect to the internal portion of the network. This allows external users to access services within the DMZ but restricts them from making any legitimate connections further into the network. As with blacklisting, the DMZ concept is a good one. It reduces the risk to the more sensitive portions of the network by moving highly accessed public servers into a separate network segment, with the intention of physically separating them from the rest of the network. There will always be some systems on the network that need to be publicly accessible. Web servers are the face of any company today, so they need to be accessible to anyone. External email servers fall into the same category and need to communicate with other email servers to keep traffic flowing. DNS servers are also needed to keep all the connections going to the correct IP addresses. These types of systems cannot be protected through whitelisting because there is no way to determine who needs to access them on a regular basis. Unfortunately, these are also some of the systems most commonly targeted by hackers, which creates a difficult game of cat and mouse in keeping them secure against the latest vulnerabilities. The problem with the DMZ concept is that too many organizations have complicated it by having servers sitting on the private side of the network that are accessed from within the DMZ. This is commonly seen when web servers have to pull data from a database server: the user connects to the website and conducts a transaction, and the web server then accesses the database to store or retrieve the new information.
This connection gives hackers the potential to reach through the firewall and gain access to the internal portion of the network. It is absolutely critical that all publicly accessible or pseudo-accessible systems not be located within the private area of the network. The DMZ is only as effective as its management, and when public systems are located elsewhere on the network, its purpose is defeated.
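The DMZ rule described above can be modeled as a zone-to-zone policy table. The zone names and the helper function here are invented for illustration; a real firewall would express the same idea in its own rule language:

```python
# Toy zone policy for the basic DMZ design: public clients may reach DMZ
# services, internal users may reach the Internet, and no flow is
# permitted between the DMZ and the internal network in either direction.
ALLOWED_FLOWS = {
    ("internet", "dmz"),       # external users reach public-facing servers
    ("internal", "internet"),  # staff browse outward
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Deny any zone pair that is not explicitly allowed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS
```

The key property is that `flow_permitted("dmz", "internal")` is false: even a fully compromised DMZ server has no permitted path back into the private network.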
A more robust DMZ architecture should be implemented to alleviate these problems. First, the DMZ should not just be a separate segment of the network; it should be a separate network entirely. If a hacker gains access to a system within a DMZ and there are secondary connections that reach into the internal network, then they will be able to manipulate those connections to obtain a deeper level of penetration. Placing the DMZ on a completely separate network, and including all necessary systems on that network, is the best way to eliminate this risk. Firewall and router configurations alone do not provide enough protection between these segments, because once the hacker is in the DMZ, they can scan and continue just as in any other operation. By physically separating the two, the hacker is placed in a proverbial sandbox with nowhere else to go. The second step is to vigorously ensure that patch levels, antivirus signatures, and all other security software settings are always up to date. DMZ systems are arguably the most important systems to protect, precisely because they are the most exposed. Keeping them well locked down and monitoring their connections closely is the only thing a security professional can do with public-facing systems. DMZs are not the only place on a network where connections need to be restricted. Oftentimes there are sections of an internal network that need a greater degree of protection, and a similar segmentation concept can be applied within the internal portion of the network to limit or eliminate some of those connections. Networks are designed to give users flexibility in communicating with each other, but most users on corporate networks do not need access to every system attached to the network or every file contained therein. This is another of the great shortfalls in network security.
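Internal segmentation follows the same deny-by-default pattern applied to user roles. The role and segment names below are hypothetical, chosen only to illustrate the mapping of users to their authorized segments:

```python
# Illustrative segment authorization: each role may reach only the
# internal segments it has been explicitly granted; everything else
# is denied by default. Role and segment names are invented.
SEGMENT_ACCESS = {
    "hr_staff": {"hr_segment"},
    "engineering": {"dev_segment", "build_segment"},
}

def can_reach(role: str, segment: str) -> bool:
    """A user may only reach systems inside their authorized segments."""
    return segment in SEGMENT_ACCESS.get(role, set())
```

An unknown role receives the empty set and therefore reaches nothing, which mirrors the lecture's point that a hacker who lands inside one segment should not gain free run of the rest of the network.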
Most users on the internal portion of the network have some level of access to all of the available resources, and even though some of these are protected with application-level security, most are easily accessible with a small amount of knowledge. The reality is that most networks protect the external boundary points but provide very little protection on the internal segments. This can prove disastrous if a hacker penetrates the external defenses, because they will then have free rein of the network. Applying the ideas in this lecture can provide a less complicated way to protect these systems.
Implementing these simple methods can provide more security than most networks currently have today. Yes, it cuts off a good degree of traffic, but that is the key point. The less traffic allowed, the less chance of a successful penetration, and the less chance of spending money on a cleanup operation. Network security doesn't have to be complicated; it just has to be logical.
De-complicating Network Security
This streamlined approach is intended as a contrast to traditional defense-in-depth methods, showing that simpler techniques can establish a more solid security posture. Even though defense in depth has been proven effective, it is expensive to fully deploy and cumbersome to manage all the pieces of the puzzle. A network of any size can implement the methods described in this lecture far more easily than current strategies.
For more information, please read the following articles:
- Wu, H., Yein, A., & Hsieh, W. (2015). Message authentication mechanism and privacy protection in the context of vehicular ad hoc networks. Mathematical Problems in Engineering, 2015, 1-11.
- Ajmi, A. (2015). Hacked! Lessons learned from an URL injection. Computers in Libraries, 35(5), 8-11.