From: Giang Nguyen [nguyen59@illinois.edu]
Sent: Thursday, April 22, 2010 12:20 PM
To: Gupta, Indranil
Subject: 525 review 04/22

CS525 review
nguyen59
04/22/10

End to end arguments in system design

This paper concerns the design of layered systems, in which higher layers use services provided by lower layers. The question is at which layer certain functionality should be implemented. The position of this paper is against low-level implementation of functionality that only the higher application levels can completely and correctly implement.

The first example is a file transfer application running over a communication system. The functionality in question is reliable and correct transfer of the file. In this example, the application itself has to perform so many checks on the received file contents that it makes little sense for the communication system to implement reliability checking of the file data while transferring it. Another example is that even if the communication system provides acknowledgment that a message was delivered to the destination host, that is sometimes not enough for the application; the application sometimes wants to know whether the message was acted on. Yet another example is that even if the communication system encrypts data, the data is still in the clear between the host and the application, and the hosts still have to check the authenticity of messages. Other cited examples in favor of the end-to-end argument are duplicate message suppression, guaranteeing FIFO message delivery, and transaction management.

Pros:
- Presents a strong argument with many easy-to-understand examples.

Cons:
- Section 3.3 (duplicate message suppression) misses a potentially very strong reason against doing it in the network: the network would have to remember messages sent by many hosts on many connections for X number of seconds/minutes in order to detect a duplicate. That is clearly very hard to do.

The argument made in this paper is influential and still applicable today. However, there is a recent article (I can't remember exactly what it's called) in ACM that says some individual networks/systems tolerate failures well, but when they interact in a bigger system/network, they become much more failure-prone.
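To make the file-transfer example above concrete, here is a minimal Python sketch of the end-to-end check the review describes; the send/receive callables, retry limit, and checksum choice are illustrative stand-ins, not from the paper. The application verifies the received bytes against its own checksum and retries on mismatch, independent of whatever reliability the transport claims to provide.

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def transfer_file(send, receive, file_bytes: bytes, max_tries: int = 3) -> bytes:
        # The sending application computes its own checksum; in practice it
        # would be shipped to the receiver out of band or in a header.
        expected = sha256(file_bytes)
        for _ in range(max_tries):
            send(file_bytes)                  # may cross an unreliable network
            received = receive()
            if sha256(received) == expected:  # the check only the ends can perform
                return received
        raise IOError("file failed end-to-end verification")

Any checking the communication system does underneath changes only how often the retry loop fires, not whether the application needs it.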
From: Ghazale Hosseinabadi [gh.hosseinabadi@gmail.com]
Sent: Thursday, April 22, 2010 12:17 PM
To: Gupta, Indranil
Subject: 525 review 04/22

An End to the Middle

Different middleboxes are built and deployed for different purposes. Well-known examples are firewalls, NATs, load balancers, traffic shapers, deep packet inspection and intrusion detection systems, virtual private networks, network monitors, transparent web caches, content delivery networks, etc. The main problem is that these boxes are too expensive to use in small networks. The other main issue is their perimeter-based functionality: a middlebox usually works just at the edge of the network, not throughout the entire network. This paper provides a solution that addresses these issues. The proposed solution is called "End to the Middle" or ETTM. In this solution, a centralized middlebox is replaced with a distributed one with the same functionality. In ETTM, trust domains inside the network are first identified; a trusted platform module (TPM) then detects end hosts that can be trusted. The authors designed the architecture of ETTM end hosts to work with the TPM. Physical switches used in ETTM should be simple and uniform in the features they provide. They should also be capable of the following functions: neighbor discovery, switching/routing, authentication, and querying. In ETTM, network management (such as policy handling, resource discovery and monitoring, and consensus and agreement) is done via distributed services. As example network services, the authors describe how NAT can be implemented in a distributed way and how quality of service can be guaranteed.

Pros: This paper suggests that costly middleboxes designed for large networks can be replaced with a distributed solution. In this solution, the parts of the network that can be trusted are first identified, and the desired functionality is then assigned to different distributed parts of the network.

Cons: Replacing a centralized middlebox with a distributed equivalent makes the system more complex. This complexity might hurt the network's performance and efficiency; it can show up as messages exchanged, time needed to converge, or extra memory used at the distributed hosts. All these costs together might be comparable to the cost of buying the middlebox.
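To make the distributed NAT idea concrete, here is a rough Python sketch; all names and the agree callback are illustrative, not the paper's code. Each end host keeps a replica of the external-port mapping table and installs a new mapping only after the group's agreement protocol accepts it, so any host can answer where inbound traffic should be forwarded.

    class DistributedNatTable:
        """Replica of the NAT state kept at one end host."""

        def __init__(self, agree):
            self.mappings = {}  # external port -> (host ip, internal port)
            self.agree = agree  # stand-in for a distributed consensus round

        def claim(self, external_port, host_ip, internal_port):
            if external_port in self.mappings:
                return False                    # taken in the agreed-upon state
            if self.agree((external_port, host_ip, internal_port)):
                self.mappings[external_port] = (host_ip, internal_port)
                return True                     # every replica applies the same update
            return False

        def forward(self, external_port):
            # Any end host can answer: where does inbound traffic for this port go?
            return self.mappings.get(external_port)

    # Trivial single-replica "agreement" just to show the flow:
    table = DistributedNatTable(agree=lambda proposal: True)
    table.claim(8080, "10.0.0.7", 80)
    assert table.forward(8080) == ("10.0.0.7", 80)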
From: Shivaram V [shivaram.smtp@gmail.com] on behalf of Shivaram Venkataraman [venkata4@illinois.edu]
Sent: Thursday, April 22, 2010 12:13 PM
To: Gupta, Indranil
Subject: 525 review 04/22

Shivaram Venkataraman - 22 April 2010

An End to the Middle

This is a position paper which presents End To The Middle (ETTM), a new research direction for the organization and maintenance of small-scale networks. Traditionally, networks depend on a large variety of middleboxes like NATs, firewalls, proxy servers, intrusion detection servers, etc. These middleboxes are expensive, complex to set up, and often use proprietary hardware/software. Modern computers have multiple cores and spare CPU cycles to run the networking software stack. The authors propose a new design which uses the Trusted Platform Module to ensure the integrity of the end hosts. In this design, every machine runs a thin hypervisor (like Xen), and the network management functionality runs in a separate VM on top of the hypervisor. This special VM is called the Attested Execution Environment (AEE). The user's operating system runs in a different VM, and all network packets sent/received by the user pass through the AEE, which can enforce policies like filtering, shaping, etc. The AEE can also provide some of the functionality which middleboxes provide today. NATs are some of the most widely used middleboxes, and they can be replaced by placing logical-to-physical address mappings in the AEE. Any packet which is routed into the network can then be forwarded to the appropriate end host. Finally, the authors also envision having simple middleboxes in the network, which can be programmed using OpenFlow or OpenWrt. These devices will provide features like neighbor discovery, switching/routing, authentication, and querying for statistics.

Pros:
- A very radical and novel approach to solving the problem of proprietary middleboxes.
- Moving some of the network functionality to the end hosts would make it easier to add new policies, as the end hosts know which traffic is important.

Cons:
- Middleboxes with custom hardware and software stacks provide high performance guarantees. It is not clear if the same performance can be achieved using the proposed ETTM scheme.
- The design of ETTM is focused on organizations in which all the resources are owned by a single group. It would be interesting to see what modifications would be required for the design to be applicable to wide-area networks.

Interesting points:
- The ability of the network stack running in a separate VM to detect the exact application which sent a packet may be non-trivial.
- The question of security and churn could be handled by applying some of the existing research on Sybil-proof DHTs.

From: pooja.agarwal.mit@gmail.com on behalf of pooja agarwal [pagarwl@illinois.edu]
Sent: Thursday, April 22, 2010 12:03 PM
To: Indranil Gupta
Subject: 525 review 04/22

DS REVIEW 04/22
By: Pooja Agarwal

Paper – An End to the Middle

Main Idea: This paper presents ideas on how to reduce dependence on the proprietary middlebox hardware used to provide several functionalities on top of the Internet. In light of recent computational and storage advances in end devices, the authors propose to build software-based middlebox functionality on top of these end devices to provide better control and a cleaner design for network management tools. As part of the architecture, each end device supports a separate VM for network management, and all traffic goes through this VM, called the Attested Execution Environment (AEE). The switches and routers forward traffic in FIFO order, and the receiving end device also retrieves packets in FIFO order. All network management tools are handled as applications implemented in the AEE.

Pros:
1) The paper presents some key ideas on how the resources available at end hosts can be used for network management. The distributed NAT covered in the paper provides an example of how certain applications can be modified to work at the end-device level.
2) The system tackles the problem of small home networks, in which expensive middleboxes and heavy network management are not required and the resources of end devices should be sufficient.

Cons:
1) The proposed idea of shifting management toward the end fails to work for network management aspects which require end-to-end management, for example priority-based end-to-end scheduling and delivery of packets, end-to-end traffic shaping, end-to-end delay and jitter minimization, and many more. To achieve these goals, it is very important for all levels, from end devices to routers and switches, to follow the same behavior. Hence, the middleboxes in these cases cannot be eliminated.
2) The tradeoff between the management overhead added to the end devices and the gains achieved by using the management tools only at the end devices is unclear.
3) It is true that some amount of management at the end devices can help; however, it is hard to envision that performing management only at the end devices will help much in large networks.

With Regards,
Pooja
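The AEE described in the two reviews above is essentially a policy pipeline that every packet crosses before reaching the wire. A minimal sketch, assuming invented packet fields and policies (none of this is the paper's code):

    from dataclasses import dataclass

    @dataclass
    class Packet:
        dst: str
        port: int
        app: str           # the AEE can attribute traffic to an application
        priority: int = 1

    def block_smtp(pkt):                   # a firewall-style policy
        return None if pkt.port == 25 else pkt

    def deprioritize_updates(pkt):         # a traffic-shaping-style policy
        if pkt.app == "software-updater":
            pkt.priority = 9               # larger number = lower priority here
        return pkt

    class AEE:
        def __init__(self, policies):
            self.policies = policies

        def send(self, pkt, wire):
            for policy in self.policies:   # each policy may drop or rewrite the packet
                pkt = policy(pkt)
                if pkt is None:
                    return False           # dropped before reaching the wire
            wire(pkt)
            return True

    aee = AEE([block_smtp, deprioritize_updates])
    aee.send(Packet("10.0.0.2", 443, "browser"), wire=print)   # forwarded
    aee.send(Packet("10.0.0.2", 25, "mailer"), wire=print)     # blocked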
From: Fatemeh Saremi [samaneh.saremi@gmail.com]
Sent: Thursday, April 22, 2010 11:45 AM
To: Gupta, Indranil
Subject: 525 review 04/22

Paper 1: END-TO-END ARGUMENTS IN SYSTEM DESIGN

The paper presents the end-to-end argument, which guides the placement of functions among the modules of a distributed system. It highlights that functions placed at a low level of the system may be of little value, and even significantly redundant, considering their cost. Since the requirements are ultimately driven by the application, placing functions at the high-level ends is often unavoidable anyway. It is therefore questionable whether to provide extra functions at low levels with the aim of further helping the higher-level applications with their needs. The authors try to reduce the temptation to place significantly more functions at a low level than it needs to provide, since applications will most likely have to implement their own mechanisms to ensure correctness anyway. On the other hand, placing the functions at a low level as well can significantly improve performance, so a careful application-based trade-off is needed to decide where to embed them: the middle or the end. The authors provide a wide range of examples from different applications to explain and support their argument.

The paper addresses a fundamental design decision which arises in many settings, such as data communication systems (e.g., encryption, duplicate message detection, message sequencing, guaranteed message delivery, detecting host crashes, delivery receipts, etc.), computer operating systems, and others. A dual line of reasoning can be applied to decide between thin and thick layering (such as in cloud computing). However, the ends are not easy to identify, and it is not always possible to distinguish the ends statically; that is, in a specific application and for a particular property, it might be preferable to change the notion of the ends from time to time and place to place in order to achieve better performance. Therefore, the notion of an end does not depend solely on the application's design objective and the particular property, but on the environment as well.

From: Shehla Saleem [shehla.saleem@gmail.com]
Sent: Thursday, April 22, 2010 11:25 AM
To: Gupta, Indranil
Subject: 525 review 04/22

END-TO-END ARGUMENTS IN SYSTEM DESIGN

This is a seminal paper: highly cited, much argued over, but still revered. It makes an argument against placing functions at low levels of the system, remarking that the cost of providing them will outweigh the returns. The main idea behind this argument lies in considering how different functions are performed. Even if functions like error detection are performed at low levels, the applications or ends would still have to perform them, so there may be redundancy of functionality. A readily available performance benefit of end-to-end implementations is that the amount of processing the network must do is greatly reduced. This results in performance gains when the network is bottlenecked on processing. Also, with the end-to-end design, functions need only be performed once, at the ends, rather than the network performing them at, for example, each hop or in each domain. Another benefit is simplicity and flexibility: networks that follow the end-to-end argument are easier to design, manage, and modify, and hence may adapt more easily to new technologies. Another major concern is the cost of implementing functionality at lower layers. If certain guarantees are provided at lower layers, all applications have to bear their cost, whether they want those guarantees or not. The kind of knowledge that lower layers have will not be sufficient when it comes to providing application-specific guarantees. An application is the entity that knows which metrics are important to it, and different applications may have widely different constraints. For example, a VoIP application may tolerate occasional packet losses but require that delays be upper bounded.
If a lower layer tries to provide protection/recovery from packet losses and waits for retransmissions, then by the time the application layer gets the packet, the delay may be so large that the packet is useless. Now consider financial transactions and data transfers: although minimizing delay is important to these applications as well, it is not 'critical'. Packet losses, on the other hand, are critical and must be protected against. These simple examples demonstrate how applications can better decide which performance objectives to work toward. Therefore, what seems appropriate is for the lower layers to provide only a 'best-effort' kind of service and leave further optimizations to the higher layers, a.k.a. the ends. This principle also lies behind the smart-edge, dumb-core design of the Internet.

However, the definition of 'ends' becomes blurred now that firewalls, NATs, caches, etc. have become common. So do 'ends' mean users, applications, or the transport layer? The paper considers the ends to be users in some cases and the transport layer in others. But there may be cases where users or applications implement true end-to-end checks; for example, an application may check its own internal operation for errors while writing to disk, and for such applications transport-layer checking might be redundant. Security can be another example: should security be left to the network to ensure? In that case the application would first have to build a trust relationship with the network, which may be both costly and imperfect.

Finally, consider routing. Source routing has long been known to the research community, and even though this paper advocates it, it is still not common in routing as we know it today. We consider the network responsible for routing. This has several advantages, the main one being that the network has far more knowledge about the status of routes and can adapt to link failures more quickly when needed. This can be extended to congestion control as well. The paper advocates end-to-end congestion control; however, this is a fiercely argued issue. The ends may indeed be the cause of congestion, but congestion occurs in the network, and so the network has better knowledge of where and when it occurs. TCP-like congestion control can only detect congestion after it has occurred. Networks can give a hint of it with schemes like RED, but that may still lead to dropping packets which have already come a long way and consumed plenty of network resources. Considering the current state of affairs, it might not be reasonable to assume that the ends will be TCP-friendly and abide by the requirements to help relieve congestion. Moreover, TCP-like congestion control fails badly in wireless scenarios, where packet losses often happen not as a result of congestion but because of fading and hostile channel conditions.

Despite all the arguments against the end-to-end argument, the paper remains very popular, and considering the time it was written, it is quite far-sighted as well.

From: Jayanta Mukherjee [mukherj4@illinois.edu]
Sent: Thursday, April 22, 2010 10:53 AM
To: Gupta, Indranil
Cc: indy@cs.uiuc.edu
Subject: 525 review 04/22

The Middle or the End?
Jayanta Mukherjee, NetID: mukherj4

An End to the Middle, by C. Dixon et al.
The authors present a network architecture which leverages existing resources, namely end hosts, to provide a network that "just works" by shifting management toward the edge. They propose a shift from using proprietary middlebox hardware as the dominant tool for managing networks toward using open software running on end hosts. They argue that functionality that seemingly must be in the network can instead be provided at the edge, with the network offering a set of simple primitives that can be controlled from the edge, much as an operating system controls hardware via the hardware abstraction layer. The authors call their approach "End to the Middle" or ETTM. ETTM is intended to manage the resources of a single organization.

Pros:
1. The proposed functionality can be provided more cheaply, flexibly, and securely by distributed software running on end hosts, working in concert with vastly simplified physical network hardware.
2. They address two key shortcomings of the current middlebox-based approach: they reduce cost, and they integrate the full set of middlebox functionality with every router and LAN switch.
3. They propose a more radical, quicker, and cheaper option.
4. The approach may be suitable for maintaining consistency, although it does not explicitly guarantee consistency or address similar issues. With suitable tuning, systems developed with this approach can update state as and when required.
5. They keep the logical address translation tables at each end host in the network (or possibly a subset, for scalability) and ensure they are consistent using a distributed agreement protocol.
6. Trusted computing enables end hosts to be brought into the fold by verifying that they are running a particular version of the network protocol stack.
7. They are explicit that they are replacing a simple, centralized solution with a complex, distributed one, and argue that the trade-off is worthwhile.

Cons:
1. There is not much control over the middle layer, so it may at times behave in unwanted ways.
2. The system can suffer from security issues, and its robustness depends on how well it is designed, since robustness does not come naturally to such systems.
3. A real system can differ greatly from its design, so it is difficult to analyze the approach from the design proposed in the paper alone.
4. It is not obvious whether tightly controlled network hardware is preferable or not.
5. The approach may impose restrictions on how users can use their own computers.

Comments: This approach can improve quality of service significantly, so the paper is worth studying. But the approach was designed with small networks in mind, so its scalability will be limited.

-With regards,
Jayanta Mukherjee
Department of Computer Science
University of Illinois, Urbana-Champaign
Urbana-61801, IL, USA
Mobile: +1-217-778-6650

From: Kurchi Subhra Hazra [hazra1@illinois.edu]
Sent: Thursday, April 22, 2010 9:41 AM
To: Gupta, Indranil
Subject: 525 review 04/22

An End to the Middle
--------------------------------------------------------------------------

Summary
------------
This paper proposes shifting complex network management and routing tasks to end points in networks, such as desktops, laptops, and smart phones. The argument the authors put forward in support of their proposal is that the middleboxes that currently carry out such tasks are costly and not affordable for smaller organizations or home networks.
In addition, laptops and desktops are becoming progressively more powerful: they have more than one core, and much of their processing power goes unused. Trust in end points is no longer a pressing issue, given the emergence of virtual machines and the security they provide. The authors call their approach End To The Middle (ETTM), since middleboxes are no longer in control of the network. They assume a Trusted Platform Module (TPM) deployed at end points, which provisions resources at the host to securely manage the network. There is also a hypervisor that creates an Attested Execution Environment (AEE), a virtual machine that carries out network management tasks and is protected from the other loaded VMs. All incoming and outgoing network traffic is routed through the AEE, which can then enforce the network policies currently in effect. The authors also sketch how neighbor discovery, routing, querying, and authentication can be carried out in this scenario. Network management is now distributed in nature. Network policies can be expressed in terms of applications, and priorities can be enforced without complex heuristics. The hosts can work together in a distributed fashion to assemble a network view; thus, distributed consensus plays a major role here. Such a management layer on end hosts can take care of handling NATs and guaranteeing QoS.

Pros:
1. This paper is theoretical in nature. It, however, redirects research in networks and systems in light of current technological advances. The authors point out that since computing has changed radically, it is more sensible to shift control and power to end hosts than to maintain costly middleboxes.
2. The approach enables priority to be assigned to network traffic in terms of applications. A teleconferencing video will be given higher priority than a software update. With the emerging importance of QoS, this is an important advantage.

Cons:
1. They are proposing a radical change, and it will be very difficult to get such a system working on a large scale. Millions of computing devices already in use are not powerful enough to support network management as a background service.
2. They propose that mobiles and smart phones can also act as end points. However, at present these are not powerful or resourceful enough to run a separate virtual machine for network management.
3. For consensus, they assume the environment of an organization. However, they claimed at the beginning that such a system can be used in home networks, where it will be difficult to achieve consensus. In fact, I see consensus as an important bottleneck in the paper, and not enough treatment is given to this topic.
4. It would be good if they evaluated the proposed execution environment and showed how the addition of such a virtual machine affects the working of the rest of the system.

Thanks,
Kurchi Subhra Hazra
Graduate Student
Department of Computer Science
University of Illinois at Urbana-Champaign
http://www.cs.illinois.edu/homes/hazra1/
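The application-aware prioritization in pro 2 above can be sketched in a few lines. Because the end host knows which application produced each packet, a strict priority queue suffices; the application names and priority values below are invented for illustration.

    import heapq
    import itertools

    APP_PRIORITY = {"video-call": 0, "web": 1, "software-update": 2}  # lower = sooner

    class PriorityScheduler:
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()   # tie-breaker keeps FIFO within a class

        def enqueue(self, app, packet):
            prio = APP_PRIORITY.get(app, 1)
            heapq.heappush(self._heap, (prio, next(self._seq), packet))

        def dequeue(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    sched = PriorityScheduler()
    sched.enqueue("software-update", "update chunk")
    sched.enqueue("video-call", "video frame")
    assert sched.dequeue() == "video frame"   # the call is served first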
From: ashameem38@gmail.com on behalf of Shameem [ahmed9@illinois.edu]
Sent: Thursday, April 22, 2010 7:43 AM
To: Gupta, Indranil
Subject: 525 review 04/22

=====================================================================
End to End Arguments in System Design
=====================================================================

In the paper titled "End to End Arguments in System Design", the authors propose a design philosophy for function placement. They argue that functionality traditionally implemented at intermediate nodes (i.e., at a lower level) should be implemented at the end hosts (i.e., at a higher level). Through many examples of higher-level functionality, like reliable file transfer, secure transmission of data (vs. lower-level encryption), delivery guarantees (acknowledgments), duplicate message suppression, guaranteeing FIFO delivery, etc., the authors explain the effectiveness of the end-to-end argument.

Since its publication in 1984, the proposed design philosophy has become one of the most celebrated principles in the systems and networking area. To me, it has given proper guidance for system design that is still valid to some extent, and it encourages the proper placement of functions. On the other hand, with the exponential growth of the Internet, application requirements vary widely, and the argument might not always be suitable. The argument also doesn't take into account the advancement of technology. So I think the end-to-end argument should not be treated as absolute; rather, it should be considered nothing more than a design tool.

Pros:
1. This paper presents, in a very convincing way, why functionality should be implemented at the upper layers of the system.
2. The end-to-end argument provides greater flexibility for system design.
3. By following the end-to-end argument, the core network can be simpler and faster.

Cons:
1. I don't think the argument is valid in all cases.
2. In some cases, it might be hard to define the end points of a system.
3. How trustworthy are the end points? If the end points are not trustworthy enough, can we rely on the end-to-end argument?
4. Although the end-to-end argument is useful for some security functions, I don't think it covers all security requirements.
5. Is it a practical assumption that failures are occasional/transient?
6. End-to-end doesn't guarantee congestion control.

Discussion points:
1. Compare the end-to-end argument with the "principle of economy", where functions should be implemented with the lowest possible cost in mind.
2. To me, the end-to-end argument should be redefined. What would be the best refined argument in that case?
3. What are the possible trade-offs in following the end-to-end argument:
a. Short-term performance vs. long-term flexibility
b. Performance vs. cost

From: Virajith Jalaparti [jalapar1@illinois.edu]
Sent: Thursday, April 22, 2010 7:15 AM
To: Gupta, Indranil
Subject: 525 Review 04/22

Review of "An End to the Middle":

The paper argues for a new network architecture in which all the complexity of the middleboxes in the network is removed and delegated to the end hosts in the system. Today's networks use a lot of in-network functionality such as NATs, load balancers, etc.,
thereby ensuring that networks keep up with increasing scale and operational complexity without affecting the end hosts of the network. However, the paper makes the case that shifting all such complexity to the end hosts would lead to a much simpler, cheaper solution which is more efficient at accomplishing the task at hand. The paper proposes techniques that leverage trusted hardware to ensure that the end hosts use a network stack that is "correct". It further uses virtualization technologies to ensure that the client software that performs network functionality is separated from the rest of the client and the client OS. The paper also proposes the use of intelligent switches that can perform network discovery, routing, and authentication of clients, and that support end-host querying. This keeps the core network dumb, with most of the functionality pushed to the ends of the network. The paper further requires the end hosts to be able to participate in a distributed consensus protocol in order to agree on how they function. The paper goes on to provide more details on deploying NAT and QoS services at the end hosts.

Comments:
- While the paper argues that several functionalities provided by middleboxes today should be shifted to the end hosts to make networks cheaper and easier to manage, it doesn't fully explore the complexity this would introduce at the end hosts.
- Traditionally, in-network solutions were preferred in order to function transparently to the client. Changing end hosts has conventionally been regarded as quite difficult, and the various technologies/architectures that have emerged ensure that clients can function unchanged while the network reaps the advantages of the newer technology.
- The architecture presented in the paper requires end hosts to have intimate knowledge of the network (for example, how addresses are NATed in the whole network), which would reveal internal details of the network to the end hosts. It is not obvious how network providers would deal with such exposure of their internal functioning. This could decrease the security of the network as a whole, leaving it vulnerable to attacks from well-informed end hosts.
- While the idea of a simple network is quite appealing, the idea proposed in this paper leads to quite a complex end host: such a change is quite unlikely and difficult unless it provides strong efficiency guarantees.

--
Virajith Jalaparti
PhD Student, Computer Science
University of Illinois at Urbana-Champaign
Web: http://www.cs.illinois.edu/homes/jalapar1/
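Several of the ETTM reviews above lean on distributed consensus as the step by which end hosts agree on shared network state. The toy sketch below shows only the shape of that step: a new policy is installed once a majority of the participating hosts acknowledge it. It is a stand-in for a real protocol such as Paxos, not an implementation of one, and all names are invented.

    class Host:
        def __init__(self):
            self.policy = None

        def acknowledge(self, policy):
            return True          # a real host would attest and validate first

        def install(self, policy):
            self.policy = policy

    def propose_policy(hosts, new_policy):
        acks = sum(1 for h in hosts if h.acknowledge(new_policy))
        if acks > len(hosts) // 2:   # majority quorum
            for h in hosts:          # in reality, laggards catch up asynchronously
                h.install(new_policy)
            return True
        return False

    hosts = [Host() for _ in range(5)]
    assert propose_policy(hosts, {"block_port": 25})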
From: liangliang.cao@gmail.com on behalf of Liangliang Cao [cao4@illinois.edu]
Sent: Thursday, April 22, 2010 3:13 AM
To: Gupta, Indranil
Subject: 525 review 04/22

CS525 review on The Middle or The End
Liangliang Cao (cao4@illinois.edu)
April 22, 2010

Paper 1: An End to the Middle, C. Dixon et al, Usenix HotOS 2009.

Despite their success in the past few years (e.g., NATs, packet shapers, firewalls, VPNs/proxies/IDSs), the costs to buy and run middleboxes are still too high, and the perimeter-based structure is too complex for small businesses. This paper proposes using open software running on end hosts, instead of proprietary middlebox hardware, as the dominant tool for managing networks. It observes that the centralized middlebox approach is in some ways more complex than a distributed one, since it has to provide fault tolerance and pervasive policy enforcement across a complex network and coordinate multiple machines. The proposed approach securely enlists endpoints, automatically scales management resources, and uses standard distributed systems techniques to provide fault tolerance in the face of unreliable end hosts. The authors use several convincing examples to support their argument.

Pros:
• Results show that NATs and traffic prioritization can be provided more cheaply, flexibly, and securely by distributed software running on end hosts with vastly simplified physical network hardware.
• Borrowing ideas from operating systems and distributed systems for networking seems promising.

Cons:
• It is still not clear whether all middleboxes can be replaced by the proposed architecture.
• The paper does not provide a rigorous comparison of efficiency. It would be interesting to examine whether the middlebox-based solution works better in large-scale systems.

From: Wucherl Yoo [arod99@gmail.com]
Sent: Wednesday, April 21, 2010 11:17 PM
To: Gupta, Indranil
Subject: 525 Review 4/22

The Middle or the End?, Wucherl Yoo (wyoo5)

End to end arguments in system design, Saltzer, Reed and Clark, 1984

Summary: This paper claims that placing functions at a low level of a system may be redundant and of little value considering the cost. The argument is useful for communication subsystems, which are often designed as "layers." The end-to-end argument can be viewed as a design principle that reduces the temptation to provide more functions than necessary. According to this argument, it is unnecessary to implement functions at a low level when the functions can be completely implemented at both ends (the high level), such as the application level of communication protocols. The authors present several examples to back up the argument. Error detection and recovery can be placed at both ends: the ends check for errors using checksums and retransmit packets if an error exists. Similarly, delivery guarantees, secure transmission of data, duplicate message suppression, and transaction management can be placed at both ends.

Pros:
1. Low overhead and less implementation effort, due to the simplicity of not involving the low level in the end-to-end approach.
2. Good for maintaining application-specific properties that cannot be seen at the low-level layers.

Cons:
1. Latency is increased compared with low-level placement (e.g., error detection latency). This also causes more resource consumption.
2. Less reusability of implemented code: standardized low-level code can easily be shared by higher-level implementations. Nowadays most applications use well-known libraries, and network protocols are standardized and provide many features, so implementing everything in an end-to-end fashion may itself be redundant.
3. Persistent failures, such as malicious attacks, cannot be handled well by the end-to-end approach.

-Wucherl

From: gildong2@gmail.com on behalf of Hyun Duk Kim [hkim277@illinois.edu]
Sent: Wednesday, April 21, 2010 9:43 PM
To: Gupta, Indranil
Subject: 525 review 04/22

525 review 04/22
Hyun Duk Kim (hkim277)

* An End to the Middle, C. Dixon et al, Usenix HotOS 2009.

This paper suggests the ETTM (End To The Middle) approach for managing networks without proprietary middlebox hardware. For the last few decades, various middleboxes have been used to solve various networking problems; for example, NATs, VPNs, IDSs, and so on.
This paper suggests a shift from using proprietary middlebox hardware as the dominant tool for managing networks toward using open software running on end hosts. The authors argue that this distributed style of management is cheaper, more flexible, and more secure, and the paper explains how it can be structured.

This paper provides a fresh perspective on current practice. Nowadays, middleboxes are everywhere; we find this natural and assume they have always been there. This paper suggests that a system can be distributed without them, which is a real change of mindset. However, the suggested solution may be limited to small businesses. Without a main controlling system, and relying on simple physical-layer devices, it appears that all the machines must be close enough to be touched and managed directly, which narrows the range of application of the suggested methods. Also, the authors did not show clear benefits or an actual implementation. Although they explained the architecture in detail, they did not show it in practice. It would be great to see an actual build of the proposed system and a discussion of the pros and cons found in real execution. Without such experiments, the paper reads as a discussion of ideals.

------
Best Regards,
Hyun Duk Kim
Ph.D. Candidate
Computer Science
University of Illinois at Urbana-Champaign
http://gildong2.com

From: Nathan Dautenhahn [dautenh1@illinois.edu]
Sent: Wednesday, April 21, 2010 12:22 PM
To: Gupta, Indranil
Subject: 525 review 04/22

*****************************************************
* CS 525 Reviews -- 4.22.10 -- Nathan Dautenhahn *
*****************************************************

1. End-To-End Arguments in System Design
Authors: J.H. Saltzer, D.P. Reed and D.D. Clark

1.1 Summary and Overview
This paper proposes an argument in favor of the end-to-end approach to computer systems design. The authors approach the problem by providing several examples of how an end-to-end design methodology is better than allowing function points to exist at intermediate steps of a system.

1.2 Contributions
This paper identifies the end-to-end problem and lays the groundwork for understanding it. The paper was published at a very early stage of Internet and computer systems development, and does an amazing job of noting important issues that, as we can see twenty-five years later, are essential.

1.3 Limitations
- The problem the end-to-end argument addresses is ill defined at the beginning of the paper. I had a hard time understanding what exactly the authors wanted to apply the end-to-end argument to. This could be because the authors are essentially inventing the problem, which makes it hard to nail down. Nevertheless, it could have been defined more explicitly.
- No experimental data. The authors make a lot of good arguments, but without any experimental data we have no idea whether or not their arguments are sound. They don't even use theory.

1.4 Comments
This paper introduces some really interesting and hard discussion points. I will list a few of the ones that seem most important to me:
- It appears as though the authors are, at this early stage, almost identifying the network stack. I think that without the network stack their argument is not very strong, because of the nature of the transmission errors that are common to all applications.
- This paper has really helped me gain some intuition as to how the network layering concept came about.
- It is interesting to note that as you go up the stack, things become more application-specific. For example, the transport layer does some really cool things that are specific but useful.
- The single most impactful statement of this paper is the notion that anything included in the lower levels of the subsystem is forced upon all users of that system. This is one of the best arguments for the end-to-end argument: it is important to give system designers flexibility.

From: Ashish Vulimiri [vulimir1@illinois.edu]
Sent: Saturday, April 17, 2010 12:46 AM
To: Gupta, Indranil
Subject: 525 review 04/22

Saltzer, Reed, Clark -- End-to-end arguments in system design:

This paper presents the end-to-end design principle, which essentially states that, in general, all functionality must be moved as high up the network stack as possible. The authors' primary argument for this principle is that a lot of network functions can never be provided completely by the network, and since support from the network does not reduce the burden on the application in absolute terms (note, however, the performance-related caveat at the end of this paragraph), it largely amounts to wasted effort, which has the added disadvantage of dragging down the flexibility of the network. They instantiate this argument for various network functions, such as reliable data transfer (only the end application can ultimately verify whether a message has been transferred correctly), delivery guarantees, and security requirements. However, they do note that performance considerations can sometimes outweigh the argument for simplicity, and suggest that the tradeoff between the two must be considered carefully for each application.

Comments:
* Dual to the argument above, there are also network functions which can never be provided purely on an end-to-end basis, without support from the intermediate network. An example would be QoS guarantees.
* Passing control to the end users may not always be wise in an untrusted network. For example, denial-of-service attacks would be a lot more difficult if congestion control were implemented in the network, as opposed to at the end hosts, as is done now.
* Services that are standardized and provided in the network would probably receive wider testing (due to a larger user base) than services implemented by specific end applications, which implies that bugs would be more likely to be detected in a network implementation.
* Having everything implemented by the end hosts might lead to unnecessary duplication of effort.
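As a closing illustration of a theme that recurs in the end-to-end reviews above (duplicate message suppression is cheap at the ends but expensive in the network core): a short Python sketch in which the receiving host remembers a bounded window of recent message ids, so no per-connection state is needed anywhere in the middle. The id format and window size are made up for the example.

    from collections import OrderedDict

    class DuplicateSuppressor:
        def __init__(self, window=1024):
            self.window = window
            self.seen = OrderedDict()        # message id -> True, oldest first

        def accept(self, msg_id):
            if msg_id in self.seen:
                return False                 # duplicate: dropped at the end host
            self.seen[msg_id] = True
            if len(self.seen) > self.window: # bound the memory the end pays for
                self.seen.popitem(last=False)
            return True

    dedup = DuplicateSuppressor()
    assert dedup.accept("msg-42")
    assert not dedup.accept("msg-42")        # second copy suppressed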