Java Performance Tuning (O'Reilly)

Java™ Performance Tuning, Second Edition. By Jack Shirazi. Publisher: O'Reilly.


The user of an application perceives visible activity as part of its performance. A browser that gives a running countdown of the amount left to be downloaded from a server is seen to be faster than one that just sits there, apparently hung, until all the data is downloaded. People expect to see something happening, and a good rule of thumb is that if an application is unresponsive for more than three seconds, it is seen as slow.

Some Human Computer Interface authorities put the user patience limit at just two seconds; an IBM study from the early '70s suggested people's attention began to wander after waiting for more than just one second. A few long response times make a bigger impression on the memory than many shorter ones. With a typical exponential distribution of response times, the 90th percentile value is about 2.3 times the average. Consequently, as long as you reduce the variation in response times so that the 90th percentile value is smaller than before, you can actually increase the average response time, and the user will still perceive the application as faster.
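
To make that arithmetic concrete (my own illustration, not a calculation from the book): for an exponentially distributed response time, the p-th percentile is -mean * ln(1 - p), so the 90th percentile is about 2.3 times the mean. A tiny sketch, using an assumed 2-second mean:

```java
// Minimal sketch: the 90th percentile of an exponential distribution is
// -mean * ln(1 - 0.9), roughly 2.3 times the mean. The 2-second mean is an
// arbitrary example value.
public class PercentileSketch {
    public static void main(String[] args) {
        double meanSeconds = 2.0;
        double p90 = -meanSeconds * Math.log(1 - 0.9);
        System.out.printf("mean = %.1fs, 90th percentile ~= %.1fs%n", meanSeconds, p90);
    }
}
```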

For this reason, you may want to target variation in response times as a primary goal. Unfortunately, this is one of the more complex targets in performance tuning: it can be difficult to determine exactly why response times are varying. If the interface provides feedback and allows the user to carry on other tasks or abort and start another function (preferably both), the user sees this as a responsive interface and doesn't consider the application as slow as he might otherwise.

If you give users an expectancy of how long a particular task might take and why, they often accept this and adjust their expectations. Modern web browsers provide an excellent example of this strategy in practice. People realize that the browser is limited by the bandwidth of their connection to the Internet and that downloading cannot happen faster than a given speed. Good browsers always try to show the parts they have already received so that the user is not blocked, and they also allow the user to terminate downloading or go off to another page at any time, even while a page is partly downloaded.

Generally, it is not the browser that is seen to be slow, but rather the Internet or the server site. In fact, browser creators have made a number of tradeoffs so that their browsers appear to run faster in a slow environment.

I have measured browser display of identical pages under identical conditions and found browsers that are actually faster at full page display but seem slower because they do not display partial pages, download embedded links concurrently, and so on.

Modern web browsers provide a good example of how to manage user expectations and perceptions of performance. However, one area in which some web browsers have misjudged user expectation is when they give users a momentary false expectation that an operation has finished when in fact another is about to start immediately. This false expectation is perceived as slow performance.

This frustrates users who initially expected the completion time from the first download report and had geared themselves up to do something, only to have to wait again (often repeatedly). A better practice would be to report how many elements need to be downloaded as well as the current download status, giving the user a clearer expectation of the full download time.

Where there are varying possibilities for performance tradeoffs, it is better to provide the option to choose between faster performance and better functionality. When users have made the choice themselves, they are often more willing to put up with actions taking longer in return for better functionality.

When users do not have this control, their response is usually less tolerant. This strategy also allows those users who have strong performance requirements to be provided for at their own cost.

But it is always important to provide a reasonable default in the absence of any choice from the user. Where there are many different parameters, consider providing various levels of user-controlled tuning parameters. This must, of course, be well documented to be really useful.

The time spent waiting for a user response can be used to anticipate what the user wants to do (using a background, low-priority thread), so that precalculated results are ready to assist the user immediately. This makes an application appear blazingly fast. Similarly, ensuring that your application remains responsive to the user, even while it is executing some other function, makes it seem fast and responsive. For example, I always find that when starting up an application, applications that draw themselves on screen quickly and respond to repaint requests even while still initializing (you can test this by putting the window in the background and then bringing it to the foreground) give the impression of being much faster than applications that seem to be chugging away unresponsively.
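
A minimal sketch of the look-ahead idea, not code from the book; the class, the precomputeNextView() helper, and the String result type are all invented placeholders. A low-priority daemon thread precomputes the likely next result, and the foreground code uses it if it is ready:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: precompute a likely-needed result on a low-priority
// background thread while waiting for the user, so it is ready instantly
// if the user does ask for it.
public class LookAhead {
    private final AtomicReference<String> precomputed = new AtomicReference<>();

    public void startLookAhead() {
        Thread t = new Thread(() -> precomputed.set(precomputeNextView()));
        t.setPriority(Thread.MIN_PRIORITY); // don't compete with foreground work
        t.setDaemon(true);
        t.start();
    }

    public String nextView() {
        String ready = precomputed.getAndSet(null);
        return (ready != null) ? ready : precomputeNextView(); // fall back to computing on demand
    }

    private String precomputeNextView() {
        return "rendered view"; // placeholder for the real (expensive) work
    }
}
```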

Starting different word-processing applications with a large file to open can be instructive, especially if the file is on the network or a slow removable disk. Some act very nicely, responding almost immediately while the file is still loading; others just hang unresponsively with windows only partially refreshed until the file is loaded; others don't even fully paint themselves until the file has finished loading.

This illustrates what can happen if you do not use threads appropriately. In Java, the key to making an application responsive is multithreading. Use threads to ensure that any particular service is available and unblocked when needed. Of course, this can be difficult to program correctly and manage. Handling interthread communication with maximal responsiveness and minimal bugs is a complex task, but it does tend to make for a very snappy application.
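
As a hedged illustration of keeping a service unblocked, here is one way to load a large file on a worker thread while the rest of the program stays responsive; the file name and loadFile() helper are assumptions for the sketch:

```java
import java.util.concurrent.*;

// Minimal sketch (not from the book): load a large file on a worker thread so
// the interface thread stays free to repaint and respond.
public class ResponsiveLoad {
    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        Future<String> document = worker.submit(() -> loadFile("big-document.txt"));

        while (!document.isDone()) {
            // In a real GUI this would be the event loop repainting and
            // handling input; here we just show the caller is not blocked.
            System.out.println("still responsive...");
            Thread.sleep(200);
        }
        System.out.println("loaded " + document.get().length() + " chars");
        worker.shutdown();
    }

    private static String loadFile(String name) throws Exception {
        Thread.sleep(1000); // stand-in for slow disk or network I/O
        return "contents of " + name;
    }
}
```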

For example, a request to list all the details on all the files in a particular large directory may not fit on one display screen.

The usual way to display this is to show as much as will fit on a single screen and indicate that there are more items available with a scrollbar.

Other applications or other information may use a "more" button or have other ways of indicating how to display or move on to the extra information. In these cases, you initially need to display only a partial result of the activity. This tactic can work very much in your favor. For activities that take too long and for which some of the results can be returned more quickly than others, it is certainly possible to show just the first set of results while continuing to compile more results in the background.

This gives the user an apparently much quicker response than if you were to wait for all the results to be available before displaying them. This situation is often the case for distributed applications. A well-known example is, again, the web browser. The general case is when you have a long activity that can provide results in a stream so that the results can be accessed a few at a time.

For distributed applications, sending all the data is often what takes a long time; in this case, you can build streaming into the application by sending one screenful of data at a time. Also, bear in mind that when there is a really large amount of data to display, the user often views only some of it and aborts, so be sure to build in the ability to stop the stream and release its resources at any time.
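
A rough sketch of streaming results a screenful at a time with an abort check between screenfuls; queryResults() and userWantsToStop() are invented stand-ins for a real data source and a real UI check:

```java
import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Hypothetical sketch: deliver a long result set one "screenful" at a time and
// stop as soon as the user aborts, rather than materializing everything first.
public class StreamingResults {
    private static final int SCREENFUL = 25;

    public static void main(String[] args) {
        Iterator<String> results = queryResults().iterator();
        boolean userAborted = false;

        while (results.hasNext() && !userAborted) {
            for (int i = 0; i < SCREENFUL && results.hasNext(); i++) {
                System.out.println(results.next());
            }
            userAborted = userWantsToStop(); // e.g., the user closed the window or pressed "stop"
        }
    }

    private static List<String> queryResults() { // stand-in for a slow, streaming source
        return IntStream.range(0, 10_000).mapToObj(i -> "row " + i).collect(Collectors.toList());
    }

    private static boolean userWantsToStop() {
        return false; // placeholder: poll the UI in a real application
    }
}
```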

Caching is an optimization technique I return to in several different sections of this book when appropriate to the problem under discussion.

Some caches cannot be tuned at all; others are tuneable at the operating-system level or in Java. Where it is possible for a developer to take advantage of or tune a particular cache, I provide suggestions and approaches that cover the caching technique appropriate to that area of the application.

In cases where caches are not directly tuneable, it is still worth knowing the effect of using the cache in different ways and how this can affect performance.

For example, disk hardware caches almost always apply a readahead algorithm: the cache is filled with the next block of data after the one just read. This means that reading backward through a file in chunks is not as fast as reading forward through the file. Caches are effective because it is expensive to move data from one place to another or to calculate results.

If you need to do this more than once to the same piece of data, it is best to hang onto it the first time and refer to the local copy in the future. This applies, for example, to remote access of files such as browser downloads. The browser caches the downloaded file locally on disk to ensure that a subsequent access does not have to reach across the network to reread the file, thus making it much quicker to access a second time.
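
A minimal sketch of hanging onto data the first time it is fetched, assuming a hypothetical fetchRemote() as the expensive operation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the "hang onto it the first time" idea: cache an expensive
// lookup in a map so repeat requests use the local copy.
public class SimpleCache {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    public byte[] get(String url) {
        return cache.computeIfAbsent(url, this::fetchRemote); // computed once per key
    }

    private byte[] fetchRemote(String url) {
        return ("data for " + url).getBytes(); // stand-in for the expensive fetch
    }
}
```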

It also applies, in a different way, to reading bytes from the disk. Here, the cost to the operating system of reading one byte is the same as reading a whole page (usually 4 or 8 KB), as data is read into memory a page at a time. If you are going to read more than one byte from a particular disk area, it is better to read in a whole page (or all the data, if it fits on one page) and access bytes through your local copy of the data.
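
For example, a BufferedInputStream sized to roughly a page serves most single-byte reads from a local buffer instead of going back to the operating system each time; the file name and buffer size below are assumptions for the sketch:

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch, assuming a file named "data.bin": wrap the stream in a
// BufferedInputStream sized to roughly a page so single-byte reads are served
// from the local buffer.
public class PageSizedReads {
    public static void main(String[] args) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream("data.bin"), 8192)) {
            int b;
            long count = 0;
            while ((b = in.read()) != -1) { // each read() usually costs only a buffer lookup
                count++;
            }
            System.out.println("read " + count + " bytes");
        }
    }
}
```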

General aspects of caching are covered in more detail elsewhere in this book. Caching is an important performance-tuning technique that trades space for time, and it should be used whenever extra memory space is available to the application.

Before you start tuning, it is crucial to identify the target response times for as much of the system as possible. At the outset, you should agree on the intended performance with your users (directly if you have access to them, or otherwise through representative user profiles, market information, etc.).

The performance should be specified for as many aspects of the system as possible, including:

- Multiuser response times, depending on the number of users (if applicable)
- Systemwide throughput

Otherwise, you will not know where to target your effort, how far you need to go, whether particular performance targets are achievable at all, and how much tuning effort those targets may require. But most importantly, without agreed targets, whatever you achieve will tend to become the starting point.

The following scenario is not unusual: a manager sees horrendous performance, perhaps a function that was expected to be quick but takes far longer. His immediate response is, "Good grief, I expected this to take no more than 10 seconds." So you tune it until it meets that initial expectation. The manager's response is now, "Ah, that's more reasonable, but of course I actually meant to specify 3 seconds—I just never believed you could get it down so far after seeing how long it took." Now you can start tuning.

Agreeing on targets before tuning makes everything clear to everyone. You then need benchmarks: precise specifications stating what part of the code needs to run in what amount of time. Without first specifying benchmarks, your tuning effort is driven only by the target "It's gotta run faster," which is a recipe for wasted effort. You must ask, "How much faster, in which parts, and for how much effort?"

You must specify target times for each benchmark. You should specify ranges: for example, best times, acceptable times, etc. These times are often specified as frequencies of achieving the targets.

Note that the earlier section on user perceptions indicates that the user will see this function as having a 5-second response time (the 90th percentile value) if you achieve the specified ranges.

You should also have a range of benchmarks that reflect the contributions of different components of the application. If possible, it is better to start with simple tests so that the system can be understood at its basic levels, and then work up from these tests. In a complex application, this helps to determine the relative costs of subsystems and which components are most in need of performance-tuning.

The following point is critical: Without clear performance objectives, tuning will never be completed. This is a common syndrome on single or small group projects, where code keeps being tweaked as better implementations or cleverer code is thought up.

Your general benchmark suite should be based on real functions used in the end application, but at the same time should not rely on user input, as this can make measurements difficult. Any variability in input times or any other part of the application should either be eliminated from the benchmarks or precisely identified and specified within the performance targets. There may be variability, but it must be controlled and reproducible.

Various testing tools can serve as the basis for a benchmark harness; however, because their focus tends to be on robustness testing, many tools interfere with the application's performance, and you may not find a tool you can use adequately or cost-effectively. If you cannot find an acceptable tool, the alternative is to build your own harness. In addition, some Java profilers are listed elsewhere in this book. Your benchmark harness can be as simple as a class that sets some values and then starts the main method of your application.
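
Here is a minimal sketch of such a harness, not the book's code: it sets an assumed configuration property, starts the main method of a class named on the command line, and times the whole run:

```java
import java.lang.reflect.Method;

// Minimal sketch of the simplest harness described above: set a configuration
// value, start the application's main method (named on the command line), and
// time the whole run with wall-clock timestamps.
public class BenchmarkHarness {
    public static void main(String[] args) throws Exception {
        String appClass = args[0];                        // e.g. "com.example.MyApp" (placeholder)
        System.setProperty("myapp.input", "bench.txt");   // assumed configuration value

        Method main = Class.forName(appClass).getMethod("main", String[].class);
        long start = System.currentTimeMillis();
        main.invoke(null, (Object) new String[0]);        // run the application under test
        long elapsed = System.currentTimeMillis() - start;

        System.out.println(appClass + " elapsed ms: " + elapsed);
    }
}
```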

A slightly more sophisticated harness might turn on logging and timestamp all output for later analysis. GUI-run applications need a more complex harness and require either an alternative way to execute the graphical functionality without going through the GUI (which may depend on whether your design can support this), or a screen event capture and playback tool (several such tools exist). In any case, the most important requirement is that your harness correctly reproduce user activity and data input and output.

Normally, whatever regression-testing apparatus you have (and presumably are already using) can be adapted to form a benchmark harness. One option for generating GUI input is the java.awt.Robot class, which provides for generating native system-input events, primarily to support automated testing of Java GUIs.
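
A small sketch of scripted GUI input using java.awt.Robot; the coordinates and keystrokes are arbitrary placeholders rather than a real recorded session:

```java
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

// Sketch of scripted GUI input with java.awt.Robot. A real harness would
// replay a recorded sequence against the application under test.
public class RobotPlayback {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        robot.setAutoDelay(50);                 // pause between generated events

        robot.mouseMove(200, 150);              // move to an assumed button location
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

        robot.keyPress(KeyEvent.VK_ENTER);      // confirm an assumed dialog
        robot.keyRelease(KeyEvent.VK_ENTER);
    }
}
```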

The benchmark harness should not test the quality or robustness of the system. Operations should be normal: startup, shutdown, and uninterrupted functionality. The harness should support the different configurations your application operates under, and any randomized inputs should be controlled, but note that the random sequence used in tests should be reproducible.
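
For the reproducible random sequence, seeding java.util.Random with a fixed value is usually enough; a tiny sketch:

```java
import java.util.Random;

// Sketch of reproducible randomized input: seeding the generator with a fixed
// value makes every benchmark run see exactly the same "random" sequence.
public class ReproducibleInput {
    public static void main(String[] args) {
        Random random = new Random(42L);              // fixed seed => reproducible sequence
        for (int i = 0; i < 5; i++) {
            System.out.println(random.nextInt(1000)); // same five values on every run
        }
    }
}
```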

You should use a realistic amount of randomized data and input. It is helpful if the benchmark harness includes support for logging statistics and easily allows new tests to be added. The harness should be able to reproduce and simulate all user input, including GUI input, and should test the system across all scales of intended use up to the maximum numbers of users, objects, throughputs, etc.

You should also validate your benchmarks, checking some of the values against actual clock time to ensure that no systematic or random bias has crept into the benchmark harness.

For the multiuser case, the benchmark harness must be able to simulate multiple users working, including variations in user access and execution patterns. Without this support for variations in activity, the multiuser tests inevitably miss many bottlenecks encountered in actual deployment and, conversely, do encounter artificial bottlenecks that are never encountered in deployment, wasting time and resources.

It is critical in multiuser and distributed applications that the benchmark harness correctly reproduce user-activity variations, delays, and data flows. The benchmarks should be run multiple times, and the full list of results retained, not just the average and deviation or the ranged percentages.

Also note the time of day that benchmarks are being run and any special conditions that apply. Sometimes the variation can give you useful information. It is essential that you always run an initial benchmark to precisely determine the initial times. This is important because, together with your targets, the initial benchmarks specify how far you need to go and highlight how much you have achieved when you finish tuning.

It is more important to run all benchmarks under the same conditions than to achieve the end-user environment for those benchmarks, though you should try to target the expected environment. It is possible to switch environments by running all benchmarks on an identical implementation of the application in two environments, thus rebasing your measurements. But this can be problematic: it requires detailed analysis because different environments usually have different relative performance between functions, so your initial benchmarks could be skewed compared with the current measurements.

Each set of changes and preferably each individual change should be followed by a run of benchmarks to precisely identify improvements or degradations in the performance across all functions. A particular optimization may improve the performance of some functions while at the same time degrading the performance of others, and obviously you need to know this.

Each set of changes should be driven by identifying exactly which bottleneck is to be improved and how much of a speedup is expected. Rigorously using this methodology provides a precise target for your effort. You need to verify that any particular change does improve performance. It is tempting to change something small that you are sure will give an "obvious" improvement, without bothering to measure the performance change for that modification because "it's too much trouble to keep running tests".

But you could easily be wrong. Jon Bentley once discovered that eliminating code from some simple loops can actually slow them down (reported in Dr. Dobb's Journal): an empty loop in C ran slower than one that contained an integer increment operation. The benchmark suite should not interfere with the application. Be on the lookout for artificial performance problems caused by the benchmarks themselves.

This is very common if no thought is given to normal variation in usage. A typical situation might be benchmarking multiuser systems with a lack of realistic user simulation. Be careful not to measure artificial situations, such as full caches holding exactly the data needed for the test.

There is little point in performing tests that hit only the cache, unless this is the type of work the users will always perform. When tuning, you need to alter any benchmarks that are quick (under five seconds) so that the code applicable to the benchmark is tested repeatedly in a loop to get a more consistent measure of where any problems lie. By comparing timings of the looped version with a single-run test, you can sometimes identify whether caches and startup effects are altering times in any significant way.
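
A sketch of that comparison, with doOperation() standing in for the code path under test (the accumulator simply prevents the work from being optimized away):

```java
// Sketch of the looping technique described above: time one run of a quick
// operation, then time the same operation repeated in a loop, and compare.
public class LoopedBenchmark {
    private static double sink;

    public static void main(String[] args) {
        long single = time(1);
        long looped = time(10_000);

        System.out.println("single run (ms):      " + single);
        System.out.println("10,000 runs (ms):     " + looped);
        System.out.println("average per run (ms): " + (looped / 10_000.0));
    }

    private static long time(int iterations) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            doOperation();
        }
        return System.currentTimeMillis() - start;
    }

    private static void doOperation() {
        sink += Math.sqrt(12345.678); // stand-in; accumulate so the JIT cannot discard the work
    }
}
```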

Optimizing code can introduce new bugs, so the application should be tested during the optimization phase. A particular optimization should not be considered valid until the application using that optimization's code path has passed quality assessment. Optimizations should also be completely documented. It is often useful to retain the previous code in comments for maintenance purposes, especially as some kinds of optimized code can be more difficult to understand and therefore to maintain.

It is typically better and easier to tune multiuser applications in single-user mode first. Occasionally, though, there will be serious conflicts that are revealed only during multiuser testing, such as transaction conflicts that can slow an application to a crawl. These may require a redesign or rearchitecting of the application. For this reason, some basic multiuser tests should be run as early as possible to flush out potential multiuser-specific performance problems.

Tuning distributed applications requires access to the data being transferred across the various parts of the application. At the lowest level, this can be a packet sniffer on the network or server machine. One step up from this is to wrap all the external communication points of the application so that you can record all data transfers. Relay servers are also useful. These are small applications that just reroute data between two communication points.
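
A hypothetical sketch of such a relay: it listens on one port, forwards bytes to the real server, and counts the traffic in each direction. The host name and port numbers are placeholders, and a production version would need proper connection cleanup:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical relay server: accept a client connection, open a connection to
// the real server, and copy bytes in both directions while (for tuning
// purposes) counting how much data flows through.
public class Relay {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(9000)) {
            while (true) {
                Socket client = listener.accept();
                Socket server = new Socket("real-server.example", 8000);
                pump("client->server", client.getInputStream(), server.getOutputStream());
                pump("server->client", server.getInputStream(), client.getOutputStream());
            }
        }
    }

    private static void pump(String label, InputStream in, OutputStream out) {
        new Thread(() -> {
            byte[] buffer = new byte[8192];
            long total = 0;
            try {
                int n;
                while ((n = in.read(buffer)) != -1) {
                    out.write(buffer, 0, n);
                    total += n;
                }
            } catch (Exception ignored) {
            } finally {
                System.out.println(label + " transferred " + total + " bytes");
            }
        }).start();
    }
}
```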

Most useful of all is a trace or debug mode in the communications layer that allows you to examine the higher-level calls and communication between distributed parts.

You should use elapsed time (wall-clock time) to specify almost all benchmarks, as it is the real-time interval that is most appreciated by the user. There are certain situations, however, in which system throughput might be considered more important than the wall-clock time.

The obvious way to measure wall-clock time is to get a timestamp using System.currentTimeMillis() and subtract it from a later timestamp to determine the elapsed time. This works well for elapsed time measurements that are not short. Any measurement (including the two calls needed to measure the time difference) should span an interval long enough that the cost of the System.currentTimeMillis() calls themselves is insignificant, and I generally recommend that you do not make more time measurements within a benchmark than you need.

Beyond elapsed time, you can also measure:

- CPU time (the time allocated on the CPU for a particular procedure)
- The number of runnable processes waiting for the CPU (this gives you an idea of CPU contention)
- Paging of processes
- Memory sizes
- Disk throughput
- Disk scanning times
- Network traffic, throughput, and latency
- Transaction rates
- Other system values

However, Java doesn't provide mechanisms for measuring these values directly, and measuring them requires at least some system knowledge, and usually some application-specific knowledge as well.

You need to be careful when running tests with small differences in timings. The first test is usually slightly slower than any other tests. Try doubling the test run so that each test is run twice within the VM.

There are almost always small variations between test runs, so always use averages to measure differences and consider whether those differences are relevant by calculating the variance in the results. For distributed applications, you need to break down measurements into times spent on each component, times spent preparing data for transfer and unpacking it after transfer, and the network transfer times themselves.
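
A small sketch of that calculation over a retained list of run times (the timings shown are made-up example values):

```java
import java.util.Arrays;

// Sketch of the averaging advice above: given the retained list of run times,
// compute the mean and variance so you can judge whether a difference between
// two configurations is larger than the normal run-to-run variation.
public class RunStatistics {
    public static void main(String[] args) {
        long[] runMillis = {512, 498, 530, 505, 521};   // example timings from repeated runs

        double mean = Arrays.stream(runMillis).average().orElse(0);
        double variance = Arrays.stream(runMillis)
                .mapToDouble(t -> (t - mean) * (t - mean))
                .average().orElse(0);

        System.out.printf("mean=%.1f ms, variance=%.1f, std dev=%.1f ms%n",
                mean, variance, Math.sqrt(variance));
    }
}
```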

Each separate machine used on the networked system needs to be monitored during the test if any system parameters are to be included in the measurements. Timestamps must be synchronized across the system (this can be done by measuring offsets from one reference machine at the beginning of tests).
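
A simple sketch of measuring such an offset, assuming a hypothetical referenceTime() call that reads the reference machine's clock (for example, over a small socket protocol):

```java
// Sketch of a simple clock-offset measurement between a test machine and a
// reference machine. referenceTime() is an assumed placeholder for the call
// that returns the reference machine's current clock.
public class ClockOffset {
    public static long measureOffsetMillis() {
        long before = System.currentTimeMillis();
        long reference = referenceTime();                 // remote clock reading (assumed call)
        long after = System.currentTimeMillis();

        long localMidpoint = (before + after) / 2;        // approximate local time of the reading
        return reference - localMidpoint;                 // add this offset to local timestamps
    }

    private static long referenceTime() {
        return System.currentTimeMillis();                // placeholder for the real remote query
    }
}
```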

Taking measurements consistently from distributed systems can be challenging, and it is often easier to focus on one machine, or one communication layer, at a time. This is usually sufficient for most tuning.

As they say, "If it ain't broke, don't fix it. The second most efficient tuning is to discard work that doesn't need doing. It is not at all uncommon for an application to be started with one set of specifications and to have some of the specifications change over time.

Many times the initial specifications are much more generic than the final product.

However, the earlier generic specifications often still have their stamps in the application. I frequently find routines, variables, objects, and subsystems that are still being maintained but are never used and never will be used because some critical aspect is no longer supported. These redundant parts of the application can usually be chopped without any bad consequences, often resulting in a performance gain. In general, you need to ask yourself exactly what the application is doing and why.

Then question whether it needs to do it in that way, or even if it needs to do it at all. If you have third-party products and tools being used by the application, consider exactly what they are doing. Try to be aware of the main resources they use, from their documentation. For example, a zippy DLL (shared library) that is speeding up all your network transfers is using some resources to achieve that speedup. You should know that it is allocating larger and larger buffers before you start trying to hunt down the source of your mysteriously disappearing memory.

Then you can realize that you need to use the more complicated interface to the DLL that restricts resource usage rather than a simple and convenient interface.

And you will have realized this before doing extensive and useless object profiling to determine why your application is being a memory hog. When benchmarking third-party components, you need to apply a good simulation of exactly how you will use those products. Determine characteristics from your benchmarks and put the numbers into your overall model to determine if performance can be reached. Be aware that vendor benchmarks are typically useless for a particular application.

Break your application down into a hugely simplified version for a preliminary benchmark implementation to test third-party components. You should make a strong attempt to include all the scaling necessary so that you are benchmarking a fully scaled usage of the components, not some reduced version that reveals little about the components in full use.

Ensure performance objectives are clear. Specify target response times for as much of the system as possible. Specify all variations in benchmarks, including expected response ranges. Include benchmarks for the full range of scaling expected. Specify and use a benchmark suite based on real user behavior. This is particularly important for multiuser benchmarks. Agree on all target times with users, customers, managers, etc.

Make your benchmarks long enough: over five seconds is a good target. Use elapsed time (wall-clock time) for the primary time measurements.

Ensure the benchmark harness does not interfere with the performance of the application. Run benchmarks before starting tuning, and again after each tuning exercise.

Take care that you are not measuring artificial situations, such as full caches containing exactly the data needed for the test. Break down distributed application measurements into components, transfer layers, and network transfer times. Tune systematically: understand what affects the performance; define targets; tune; monitor and redefine targets when necessary.

Approach tuning scientifically: measure performance; identify bottlenecks; hypothesize on causes; test hypothesis; make changes; measure improved performance. Accurately identify the causes of the performance problems before trying to tune them.

Use the strategy of identifying the main bottlenecks, fixing the easiest, then repeating.

Don't tune what does not need tuning. Avoid "fixing" nonbottlenecked parts of the application.

Measure that the tuning exercise has improved speed. Target one bottleneck at a time. The application running characteristics can change after each alteration. Improve a CPU limitation with faster code, better algorithms, and fewer short-lived objects. Improve a system-memory limitation by using fewer objects or smaller long-lived objects. Work with user expectations to provide the appearance of better performance. Avoid giving users a false expectation that a task will be finished sooner than it will.

Reduce the variation in response times. Bear in mind that the response time users perceive is closer to the 90th percentile value of the response times than to the mean. Keep the user interface responsive at all times. Aim to always give user feedback. The interface should not be dead for more than two seconds when carrying out tasks. Provide the ability to abort or carry on alternative tasks. Provide user-selectable tuning parameters where this makes sense.

Use threads to separate potentially blocking functions. Calculate "look-ahead" possibilities while the user response is awaited. Provide partial data for viewing as soon as possible, without waiting for all requested data to be received. Cache locally items that may be looked at again or recalculated.

Quality-test the application after any optimizations have been made. Document optimizations fully in the code. Retain old code in comments.

Profiling Tools

If you only have a hammer, you tend to see every problem as a nail. I have used many different tools for performance tuning, and so far I have found the commercially available profilers to be the most useful. These tools are usually available free for an evaluation period, and you can quickly tell which you prefer using.

If your budget covers it, it is worth getting several profilers: they often have complementary features and provide different details about the running code.

I have included a list of profilers elsewhere in this book. All profilers have some weaknesses, especially when you want to customize them to focus on particular aspects of the application. Another general problem with profilers is that they frequently fail to work in nonstandard environments.
