Chapter 4: Network Mobility


1. The Inevitability of Network Mobility

Before the PC era, the computer industry was dominated by the mainframe. A mainframe served many terminals through a time-sharing system. With the development of the microprocessor, however, the PC stepped onto the stage of history. Early PCs worked as isolated islands, with software running separately on each machine. Soon, though, PCs were connected into networks, and the client/server software model was born. The client/server model eventually evolved into the multi-tier (N-tier) service model, known as distributed processing. Distributed processing promoted the growth of networks and allowed different users to share data and information.

Although the distributed computing model is more advanced than the centralized computing model of the mainframe era, it also brings many problems. The most prominent is the difficulty of deploying programs. In the centralized model, you only need to update the program on the mainframe, and every end user is updated at once; in the distributed model, you may have to synchronize the program's version on every PC. The browser/server model alleviates this shortcoming, but it still falls short of what users demand. The arrival of Java brought a new computing model, network mobility, in which programs are delivered automatically over the network: the server sends the program to the end user at the moment the user needs to run it.

2. A New Software Paradigm

Just as the ever-increasing power of the PC drove the shift from the centralized computing model of the mainframe to the distributed computing model, the ever-increasing bandwidth of the network is driving the shift from the distributed computing model to the network computing model.
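Java supports this kind of on-demand program delivery through its class-loading architecture: a class can be fetched from a network codebase at the moment it is first needed, instead of being installed on every PC in advance. The sketch below uses the standard `java.net.URLClassLoader`; the server address is a hypothetical placeholder, and in this run the requested class actually resolves locally through parent delegation, so the snippet executes without touching the network.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class MobileCodeDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical codebase: in a real deployment this URL would point
        // at a web server hosting the application's compiled class files.
        URL[] codebase = { new URL("http://server.example.com/classes/") };

        try (URLClassLoader loader = new URLClassLoader(codebase)) {
            // Parent delegation: classes already available locally are
            // resolved by the parent loader; anything not found locally
            // would be fetched from the network codebase on demand.
            Class<?> c = loader.loadClass("java.util.ArrayList");
            System.out.println("Loaded " + c.getName());
        }
    }
}
```

The key point is that loading is lazy: the end user never runs an installer, because each class travels across the network only when execution first requires it.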
In the new computing model, when a user runs a program, both the program and its data are sent to the end user; together, the two are called "content." In the distributed computing model, if a new version of a program contains a serious error, the end user can reject the update and continue using the old version. In the network computing model this is not possible, because all programs are delivered automatically and users have no control over them. The solution is to offer the content service in multiple versions: at least two versions of the program, a stable release and a beta branch, so that each end user can choose the version that suits them, which in turn helps make the program more robust. Only when multiple versions of a content service are offered do end users need to concern themselves with versions at all: they must understand the information about each version and decide for themselves whether to update. If end users cannot control the version of a program, they may feel they have become "slaves of the program."

Most automatically updated content services share two characteristics: powerful functionality and a simple user interface. Consider a toaster: when you encounter a new toaster, you do not want to read its manual; you expect to drop the bread in the top, flip the switch, and wait for the toast. In the same way, many content services offer powerful functionality behind a simple user interface. A good example of a content service is a web page. If you look at the HTML source, you will find it is no different from any other program, yet when you view it in a browser you see a beautiful page. Here the boundary between program and data becomes blurred. When users browse a web page, they do not think about whether the page has been updated; they simply visit the same address as before. In the software domain, the new network computing model cannot completely replace the old computing models; it can only take over part of their territory.
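The multi-version idea described above can be pictured as a client that maps a user-chosen release channel to the codebase from which content is fetched. The channel names and URLs in this sketch are purely illustrative assumptions, not anything prescribed by the text:

```java
import java.util.Map;

public class VersionSelector {
    // Hypothetical mapping from release channel to service codebase.
    // Unknown channels fall back to the stable release, so an end user
    // who never thinks about versions still gets a working program.
    static final Map<String, String> CHANNELS = Map.of(
            "stable", "http://server.example.com/v1.4/",
            "beta",   "http://server.example.com/v2.0-beta/");

    static String codebaseFor(String channel) {
        return CHANNELS.getOrDefault(channel, CHANNELS.get("stable"));
    }

    public static void main(String[] args) {
        // A user who opted into the beta branch:
        System.out.println(codebaseFor("beta"));
        // An unrecognized choice silently falls back to stable:
        System.out.println(codebaseFor("nightly"));
    }
}
```

Defaulting to the stable channel preserves the toaster-like simplicity the text describes: version choice exists for users who want it, but never blocks those who do not.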

