There are two distinct phases in the design of a website. The first is to determine what the client requires, and the second is to implement that in a technically appropriate manner. Sometimes this may involve a certain amount of interaction as different ideas are tried, but the two phases are still distinct and are outlined separately below.
The first stage is to identify what is distinctive about the client and what their customers will be looking for. This must be kept in mind at all times if the finished website is to achieve the intended purpose.
The second is to plan the overall concept in accordance with those requirements.
Both of these will normally require extensive consultation which, depending on the circumstances, may be in person, by telephone, or, most usefully, in writing by e-mail. Test sites may need to be built to illustrate concepts, and comments and revisions may need to continue for some time to optimise the project, but this investment of time will help to maximise the benefit and lifetime of the resultant site.
From the technical viewpoint, design consists of using various techniques in the appropriate combination to achieve the desired result. Factors such as overall appearance and ease of maintenance will both affect and depend on that combination, as will the flexibility of the site with respect to cross-browser issues and multiple window sizes.
A brief summary of the technical aspects follows for those who are interested:
The World Wide Web Consortium (W3C) is the body which oversees web standards. Technically, all it does is publish specifications known as Recommendations, which have no legal force. However, following these Recommendations is generally considered good practice. They cover most of the common web techniques, including HTML and CSS, which are explained below.
A server is the program which listens to the Internet connection and serves files to machines which request them. The term is also used loosely to apply to the machine on which the program is running though, strictly, the Internet's underlying protocols make no such distinction, which means any machine can act in either capacity on the same connection.
A client is the program, in this case usually a web browser, which sends requests to machines which can respond with data files. Again the term is loosely applied to the machine running the client, and again this is technically imprecise, as the underlying protocols draw no real distinction between machines.
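The request and response cycle described in these two entries can be sketched as a toy model, with the "server" reduced to a function that maps a requested path to a response. The names `pages` and `serve` are invented for illustration, not a real API, and no actual network connection is involved:

```javascript
// Toy model of the request/response cycle, not a real network server.
// A real server listens on a socket; here a "request" is just a function call.
const pages = {
  "/": "<h1>Home</h1>",
  "/about": "<h1>About us</h1>",
};

function serve(path) {
  // Each request is handled independently: the server keeps no memory
  // of earlier requests (hence the need for cookies, explained below).
  if (Object.prototype.hasOwnProperty.call(pages, path)) {
    return { status: 200, body: pages[path] };
  }
  return { status: 404, body: "Not Found" };
}
```

A browser acting as the client would, in effect, issue `serve("/about")` over the network and then decide how to render the body it receives back.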
HyperText Markup Language is the usual core language in which pages are written. Its name is intended to be descriptive and indicates that it is for annotating text rather than determining layout. Its correct use, therefore, is to mark (or "tag") pieces of text according to their logical function to enable a web browser to make sensible decisions about how to render them. This is important because the browser may have limited space in which to display the content, a limited number of available typefaces, or a limited range of available colours. It might also not be a visual device at all, but a "robot" for processing data automatically or a reading device for speaking pages aloud to a listener.
It also has tags to enable visual or audio content to be included where appropriate.
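As a brief illustration, the fragment below marks text by logical function rather than by appearance. The tag names are standard HTML; the content itself is invented:

```html
<!-- Tags describe what each piece of text IS, not how it must look. -->
<h1>Opening Hours</h1>
<p>We are open <em>every day</em> except bank holidays.</p>
<img src="shopfront.jpg" alt="Photograph of the shop front">
<!-- The alt text lets a non-visual browser describe the image. -->
```

A visual browser will typically render the heading large and the emphasis in italics, but a speaking browser is free to convey the same logical structure with pauses and changes of tone instead.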
Cascading Style Sheets are lists of recommendations to browsers about how certain kinds of content should be displayed. They can cover almost anything from the sizes of headings to the positioning of pictures or the voice to be used when reading something out. They are not mandatory and the user (viewer) may set alternative instructions according to his or her own taste, but they are generally followed in the absence of such instructions. “Cascading” means that a series of these instruction sets may be given and later ones will revise or replace earlier ones.
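The "cascading" behaviour can be illustrated with two rule sets, where the later, more specific one revises the earlier. The selectors and values here are invented for the example:

```css
/* An earlier, general recommendation for all top-level headings... */
h1 { color: navy; font-size: 2em; }

/* ...revised by a later, more specific rule: headings inside the
   page footer stay navy but become smaller. */
footer h1 { font-size: 1.2em; }
```

Any property the later rule does not mention, such as the colour here, is left as the earlier rule set it.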
Browsers differ in how well they follow CSS instructions. Internet Explorer is notorious for interpreting them differently from the specification, but is getting better with every major release.
JavaScript is a programming language usually used for processing data on the "client side", that is, in the browser. Uses include animation and checking content before submitting a form. It is severely limited in its ability to access the client machine, to reduce the likelihood of malicious use. Users can set their browsers to ignore it, so it cannot be relied upon for critical functions.
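A typical client-side use is checking a form field before it is submitted. The sketch below is a plain function with an invented name; in a real page it would be attached to the form's submit event:

```javascript
// Hypothetical check run before a form is submitted.
// Returns true only when the field looks like a plausible e-mail address.
function looksLikeEmail(value) {
  // A deliberately simple pattern: something@something.something
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}
```

Because users can switch JavaScript off, the server must repeat the same check; the client-side version only saves the visitor a round trip when something is obviously wrong.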
Cookies are small pieces of data which the server asks the browser to store and send back with later requests. They are necessary because websites are an illusion created by the browser; in reality every request to the server is a separate transaction and the server has no memory of what has led to it. It is the browser that works out what to request and how to display it when it has received it. Cookies provide that missing memory, but because they can be disabled in the browser they cannot be relied on for critical work.
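The "memory" a cookie carries travels as plain text in a header such as `theme=dark; visits=3`, which is also the format the browser exposes to scripts as `document.cookie`. The sketch below parses that format into an object; the function name is invented for illustration:

```javascript
// Parse a cookie header string ("name=value; other=value") into an object.
// This is the raw form in which stored cookies travel back to the server
// with each request.
function parseCookies(header) {
  const jar = {};
  for (const part of header.split(";")) {
    const trimmed = part.trim();
    if (!trimmed) continue;
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // ignore malformed fragments
    jar[trimmed.slice(0, eq)] = decodeURIComponent(trimmed.slice(eq + 1));
  }
  return jar;
}
```

On each request the server re-reads this small record and so appears, to the visitor, to have remembered them.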