Optimization of client-server applications and features of their development to use in the service model. Part 2.

Let’s discuss the key development aspects that affect the performance, resource consumption, reliability, and correctness of an application solution.
Many of these aspects are already codified as standards in the System of standards and techniques of configuration development for the 1С:Enterprise 8 platform. Others are simply historically established practices of correct or, conversely, incorrect use of platform capabilities.
Optimization of client-server applications and features of their development to use in the service model. Part 1.

Web-client support

The basic idea behind web-client support is that the application solution must be fully operational in the web client if we want it to work in the service. There is a simple way to achieve this: use only the web client during development and debugging. This practically guarantees that an application developed this way will be operational in the web client.

In the web client it is possible to use the file system extension. But experience shows that it is sometimes used as the only option, and this is not quite correct. It should be used only to improve the usability of the application, not as the sole way to work with files. Otherwise, a user who does not have this extension will be unable to use that part of the application's functionality. And a user may well lack the extension for various reasons. Half of the browsers supported by the platform have no such extension at all (the situation improves only in release 8.3). And even where the extension exists, not every user can install it: some do not have the rights to install additional components in the browser, and some simply do not want to.

Interface performance

To estimate the responsiveness of the interface, measure the maximum response time for a single interactive user action. Every effort should be made to keep it under 1 second.

How do we measure this second? We have performance measurement tools, and we could try to use them. We could also try to measure the response time with additional script code. Both approaches are wrong: the result will not be the time the user actually experiences. This is especially true in the web client, where there is a great deal of asynchrony. The moment when the applied code starts executing (for example, when an idle handler fires) may differ significantly from the moment when the form is actually drawn and becomes usable.

Therefore, it is recommended to take a stopwatch and measure the response time with it.

Where should the response time be measured? There is no point in doing it on a development computer. Development machines are very good, unlike those the users have. In practice, users have computers that are outdated by today's standards, although the users themselves do not consider them bad and are sure they can still be used.

In addition to a modest computer, it is desirable to install a browser with fairly slow JavaScript for the measurements. If you test only on a "fast" browser, you will miss many problems. In fact, roughly half of the overall performance depends on the browser: a slow computer with a fast browser will perform no worse than a fast computer with a slow browser.

Also, to obtain more realistic results, it is desirable to use the server call delay simulation mode and the low connection speed mode. Although, if very thorough testing is required, it is best to take a modem and check everything over a real mobile GPRS connection, where things sometimes work even worse than expected.

Factors that influence web-client performance

What determines interface performance? The main factor is client-server calls. Their number must be minimized. How? By packaging multiple calls into one wherever possible. This recommendation applies not only to the web client but to the thin client as well.
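As an illustration, here is a minimal sketch in 1С:Enterprise script; all procedure and function names are hypothetical:

```bsl
// Inefficient: two round trips to the server from the client.
&AtClient
Procedure FillRatesInefficiently()
	Currency = GetCurrencyAtServer();      // server call 1
	Rates    = GetRatesAtServer(Currency); // server call 2
EndProcedure

// Better: one call that returns everything the client needs at once.
&AtServerNoContext
Function GetCurrencyAndRates()
	Result = New Structure;
	Result.Insert("Currency", GetCurrency());
	Result.Insert("Rates",    GetRates(Result.Currency));
	Return Result;
EndFunction
```

The client then makes a single call to GetCurrencyAndRates() and unpacks the structure locally.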

What else should you pay attention to?

Firstly, programmatic modification of the form. It can have a very negative impact on form performance. The platform uses multilevel caching, and the form description, in particular, is cached too. Under normal conditions (if no code modifies the form), when the user opens the form for the first time, it is downloaded to his computer; on repeated access it is not downloaded again but taken from the browser cache. Now, if we start changing the form programmatically, two negative effects appear. First, with every server call that opens this form, the form changes are sent to the client along with the response data, and they are not cached. Second, processing the programmatic form changes takes time, so the form starts running slower.

Secondly, complex forms are slower in themselves. If we make a form with a very large number of elements and configure conditional appearance in it, then on an average user computer it will work slower. Therefore, you should no longer make forms with 20 tabs as before. Try to simplify them: split them into individual forms, disable functionality piece by piece, and so on.

Thirdly, the transfer of large amounts of data. This is not so critical and matters mainly at low connection speeds, but it is still worth taking care of. If some data is used only occasionally, it is better not to place it in the form data, because it may not be needed in some usage scenarios. It is better to fetch it from the server when necessary, and it can also be cached in the client module for further reuse.
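Such client-side caching can be sketched as follows (a form module fragment; the variable and function names are hypothetical):

```bsl
// Form module. CachedUnits is filled on first use and then reused,
// so repeated access costs no extra server calls or form data transfer.
&AtClient
Var CachedUnits;

&AtClient
Function Units()
	If CachedUnits = Undefined Then
		CachedUnits = UnitsAtServer(); // single server round trip
	EndIf;
	Return CachedUnits;
EndFunction
```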

Fourthly, the use of the keyword Val when declaring parameters of procedures and functions. In client-server interaction this keyword does not mean the same thing as it does within a single computer, client or server. When we use Val in the declaration of a server procedure parameter and call the procedure from the client, it means that the value of this parameter will not be returned to the client. If we do not specify Val, which is the default, the following happens. Suppose we call a server procedure and pass it an array, and suppose we are not even going to use this array on the client afterwards: it was just a parameter and we no longer need it. Still, when the server call finishes, the array will be packed into XML or JSON (in the web client) and returned to the client. Clearly, this is entirely inefficient. Therefore, if you do not need the value passed back through the parameter, write the keyword Val for such parameters. Of course, if the parameter is a Boolean, Val can be omitted, but even that is not good practice.
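A minimal sketch (the procedure name is hypothetical):

```bsl
// Val tells the platform the parameter is input-only:
// the array will not be serialized and sent back to the client.
&AtServerNoContext
Procedure MarkItemsProcessed(Val Items)
	For Each Item In Items Do
		// ... server-side processing of one item ...
	EndDo;
EndProcedure
```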

Long operations

Long server calls deserve special mention. If we make a client-server call and it does not complete in a reasonable time, various problems arise. What time is acceptable depends on the conditions.

For example, in the service http://www.accountingsuite.com/ the client will be closed after 75 seconds of waiting, because the web server will decide that the 1С:Enterprise server is not responding and there is no point in waiting for a reply. As a result, the client will see an error saying that the application no longer works. For users of Macintosh computers things are even worse. They use the standard Safari browser, and a timeout of 8 seconds is hard-coded into it. If the server call does not complete within 8 seconds, that is it, the application no longer works. In general, it is bad when we make a long server call and the program "hangs" in the meantime while the user can do nothing.

To resolve this situation, it is recommended to use the SSL long operations mechanism. It works quite simply. The functionality we want to execute on the server is called inside a background job. An idle handler is attached on the client, which checks from time to time whether the server reply has appeared. Thus the problem is solved, and you even get an additional bonus: the client application can keep working during this time, it does not "hang".
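The same pattern can be sketched with the platform primitives alone (outside SSL); JobID is assumed to be a client variable or form attribute, and the server method name is hypothetical:

```bsl
&AtClient
Procedure GenerateReport(Command)
	JobID = StartReportAtServer();
	AttachIdleHandler("CheckReportReady", 2); // poll every 2 seconds
EndProcedure

&AtServer
Function StartReportAtServer()
	// Start the heavy work in a background job, return its identifier
	Job = BackgroundJobs.Execute("Reports.BuildHeavyReport");
	Return Job.UUID;
EndFunction

&AtClient
Procedure CheckReportReady()
	If JobFinishedAtServer(JobID) Then
		DetachIdleHandler("CheckReportReady");
		// read the result, for example from a temporary storage
	EndIf;
EndProcedure

&AtServerNoContext
Function JobFinishedAtServer(ID)
	Job = BackgroundJobs.FindByUUID(ID);
	Return Job = Undefined Or Job.State <> BackgroundJobState.Active;
EndFunction
```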

Careful use of resources

Main memory

One of the most important and valuable resources is main memory. We all tend to forget about it, especially nowadays. When desktop computers have 16 GB of memory, why save it?

In fact, it should be saved. On the server there is never enough of it, no matter how much is available. When a few hundred users work on the server, any inefficient memory use can seriously harm its operation. With thousands of users, the effect is dramatic.

Therefore, when we write algorithms, we must proceed from the assumption that main memory is limited. If the volume of data we are going to work with is not limited in itself, it should be limited artificially: use cursor selections similar to those used in dynamic lists, and so on.

Be especially careful about building large data structures in memory. For example, the script language can process files as a whole: text documents via TextDocument, XML via DOMDocument, HTML via HTMLDocument. These are the wrong ways to work with files, because the entire file is loaded into main memory along with a lot of service information, and the file can be very big. In practice these methods are needed only in rare cases, when arbitrary access to the file content, to some specific part of the file, is required. In most cases a practical task processes the whole file sequentially, and for that you should use sequential writing and reading: XMLReader, TextReader, XMLWriter, TextWriter. These objects read files in portions and consume memory economically.
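A sequential read with XMLReader looks roughly like this (FileName and the element name "Item" are assumptions for the example):

```bsl
// Streaming read: the file is processed in portions, never loaded whole.
Reader = New XMLReader;
Reader.OpenFile(FileName);
While Reader.Read() Do
	If Reader.NodeType = XMLNodeType.StartElement
		And Reader.Name = "Item" Then
		// process one element; only the current portion is in memory
	EndIf;
EndDo;
Reader.Close();
```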

Memory leaks

Another memory-related problem is memory leaks. They do not occur often, but they create many problems, and they are hard to diagnose in practice, even though the technological log has a special tool for tracking them. It is very easy to create a memory leak in the script: make a circular reference, and the memory is gone. An example of such a reference is shown below.
Data = New Structure;
Data.Insert("Key", Data);

Of course, exactly this example will not be found in "live" code, but, unfortunately, it is possible to create a similar construction accidentally. For example, if we have objects that contain other embedded objects, and somewhere deep down they refer back to the topmost object, a circular reference is created.

What happens in this situation? Even when no external references (in variables, attributes, etc.) to these objects remain, the object will not be deleted and will stay in memory. Memory management is built on reference counting: when we take a reference to an object, its internal reference counter increases; when the reference goes out of scope or is explicitly cleared, the counter decreases. If the reference counter can never reach zero, the memory will never be released.
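When a cyclic structure is unavoidable, the cycle can be broken explicitly before the last reference is dropped; continuing the example above:

```bsl
Data = New Structure;
Data.Insert("Key", Data); // circular reference: the counter can never drop to zero
// ... work with Data ...
Data.Key = Undefined;     // break the cycle explicitly
Data = Undefined;         // now the memory can actually be released
```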

Clearly, the advice "do not write such code" is rather hard to follow. So pay attention to this pitfall whenever you create some structure in memory; otherwise it will be very hard to diagnose later. Unfortunately, leaks are usually detected not during debugging or testing but in the finished product, when the server runs out of memory.

Reuse of returned values

Another mechanism that may consume extra memory is common modules with reuse of return values. Analysis of configurations shows that developers somewhat abuse this ability and assess it incorrectly. Reuse of return values is in fact a cache, and a cache is not "free" in itself. Therefore, we must be sure that we put the "right things" there.

For example, it is possible to create reuse-enabled common modules that return string constants. But this is pointless: getting a string constant directly will always be much faster than getting it from a reuse-enabled common module. At the same time, it makes perfect sense to return data received from the database.
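For instance, a function like this in a common module with reuse of return values enabled (the constant name VATRate is hypothetical) pays off, because the database read happens once and subsequent calls are served from the cache:

```bsl
// Common module with "Reuse return values" enabled.
// Caching a database read makes sense; caching a string literal would not.
Function VATRate() Export
	Return Constants.VATRate.Get(); // one database read, then cached
EndFunction
```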

Another point. If we want to place something in the cache, we must be sure that we will access it often later. The cache does not store data forever. In general, a value is deleted from the cache 20 minutes after calculation or 6 minutes after the last use, whichever comes first. In addition, a value is deleted when the working server process lacks memory, when the working process restarts, and when the client switches to another working process. So if we did not "have time" to use the data from the cache, the cache resources were wasted.

What other strange things can be related to these modules? For example, inappropriate input parameters. The range of values received as input should not be wide. Some configurations have such functions that take, for example, a contractor as input. This can be inefficient. Suppose there are a great many contractors in the base, and the user work scenario is such that the probability of someone accessing the same contractor within 5 minutes is very small. Then the resources are wasted again. Multiply this seemingly small "waste" by the number of concurrent users, and the useless expenditure of resources becomes significant.

The last feature of this mechanism is that each time the cache returns not a copy of the object but a reference to the same object in memory. It is very easy to make a mistake and accidentally change this object after getting it. This has happened in practice: on each call a new value was added to an array returned by a reuse-enabled function, and as a result it "swelled" very quickly when documents were posted. Therefore, it is highly desirable to return values whose state cannot be changed, such as FixedArray or FixedStructure. This helps to avoid such errors.
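A sketch of such a function (the function name and contents are hypothetical):

```bsl
// In a reuse-enabled common module: return an immutable value,
// so callers cannot accidentally modify the cached object.
Function SupportedFormats() Export
	Formats = New Array;
	Formats.Add("xml");
	Formats.Add("json");
	Return New FixedArray(Formats);
EndFunction
```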

Work with temporary files

Another resource-related point is correct work with temporary files. To create temporary files, use names obtained via the function GetTempFileName(). The platform can delete files with such names by itself after the 1С:Enterprise process that created them terminates. True, a server reboot or working process restart may happen rarely, but in any case this is better than the platform never insuring the developer and never deleting temporary files at all. Obviously, this "service" does not cancel the rule that the developer must delete temporary files manually immediately after use. Otherwise they can accumulate on the server for a long time, which will eventually exhaust the disk space and bring the server down.
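A minimal sketch of this discipline:

```bsl
// Get a name the platform can clean up later as a fallback
FileName = GetTempFileName("xml");
Writer = New XMLWriter;
Writer.OpenFile(FileName);
// ... write the data ...
Writer.Close();
// ... use the file ...
DeleteFiles(FileName); // do not rely on the platform: delete it yourself right away
```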

Use of non-shared data

Special attention should be given to non-shared data. If some part of the data can be moved to the non-shared data, this gives several advantages at once. However, it should be done carefully and thoughtfully: using non-shared data improves the efficiency of the solution but increases its complexity, because maintaining non-shared data is a complex task.

Non-shared data is stored in a single instance. By itself this does not seem a significant advantage; so what, the database takes a little less space. But what really matters is that this data usually has to be updated, because we placed it there for a good reason.

What happens when shared data, for example some classifier, is updated? Quite a lot of time is spent updating the base: it is necessary to enter each data area and change the classifier data in it. For a live, really operating shared base this can take hours. Therefore, only data that genuinely must be shared should be placed in the shared data.

Very often data can be divided into shared and non-shared parts. For example, in the next version of SSL, report variants were optimized this way: the supplied variants were moved to non-shared data, and custom variants were left shared. Thanks to this, the update speed increased significantly, because the per-data-area "multiplier" disappeared and the update began to work much faster. However, when optimizing shared data, remember that data entered by users must never be stored in non-shared data, because non-shared data can be available to any user. It should also be said that writing non-shared data in a shared session is a very dangerous operation. For now it is technically impossible, because SSL has a restriction on writing non-shared data, but it is worth keeping in mind.

Optimization of script code

What else helps in the careful use of resources? A careful approach to writing script code. Even a small inefficiency in the code may add up to tens of seconds of delay if that code is executed many times in a loop with tens of thousands of iterations. Such parts of the code deserve particularly close attention. A typical example: if a computed value is needed inside a loop, obtain it once and save it in a variable instead of recalculating it on every iteration.
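The typical example above can be sketched like this (Rows, Row.Date, and Row.Overdue are hypothetical names):

```bsl
// Inefficient: the boundary is recalculated on every iteration
For Each Row In Rows Do
	If Row.Date > EndOfMonth(CurrentSessionDate()) Then
		Row.Overdue = True;
	EndIf;
EndDo;

// Better: calculate the invariant once, before the loop
Boundary = EndOfMonth(CurrentSessionDate());
For Each Row In Rows Do
	If Row.Date > Boundary Then
		Row.Overdue = True;
	EndIf;
EndDo;
```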

Minimization of time for information base version update

Minimizing the time of an infobase version update is important because the base is unavailable to users all that time. A significant part of the work here is done by SSL; however, in almost every version developers additionally write their own update handlers.

What can go wrong with these handlers? For each handler a configuration version can be specified, and then it is executed only for that version. Alternatively, the symbol "*" can be specified instead of a version number (so-called handlers "with an asterisk"); such handlers are executed every time, regardless of the configuration version. Now, if we write a handler "with an asterisk" and it is also shared, it will be executed on every version change, even if we only fixed a couple of lines of code in the configuration. Of course, such handlers should not be written, and executing them is prohibited in SSL.

In general, any IB update handlers, even those without an "asterisk", must be optimized as much as possible. The following optimization scheme is possible. A non-shared update handler is created that stores the data it needs in non-shared data. When the version changes, it analyzes whether anything has changed. If something has changed, it runs the shared handlers; if not, it does nothing. True, such a handler still performs some actions on every version change, but this is not so bad, since they are executed in the non-shared session only once, and not ten thousand times across the base.

Scheduled jobs

The last resource-related point is the use of scheduled jobs. This recommendation concerns not only shared bases but any bases at all.

When you create a schedule for a scheduled job, do not make the start interval very small. In practice this leads to problems even in non-shared bases. If you take 10 bases on one cluster and give them a schedule that runs once a minute, this alone will cause very significant problems. It is better to avoid this, because nothing good comes of it.

For example, we have a cluster and nobody uses half of its bases. Once a minute a session arises, warms up all the platform caches, loads the configuration into memory, executes some poor query, and puts everything back. And so on every minute. Resources are wasted for no particular reason.

You can try to consolidate scheduled jobs to make them fewer. This, for example, is how the job queue in SSL works for shared bases.

In general, the problem with scheduled jobs is more significant for shared bases than for "regular" ones. Here is the simplest example. We have a scheduled job "text extraction for full-text indexing" with the schedule "execute every 85 seconds". It checks whether there are changes for it to process; if there are, it extracts the data, runs the text extraction, and puts the results back into the base. Most of the time it does nothing useful, because there are no changes, especially in a base like 1С:Accounting, which does not have many places to put text. Nevertheless, the job is necessary: what if some user puts a Word file into the base, and it has to be indexed?

If this job had simply been left shared, the server would probably have gone down already: every 85 seconds a few thousand sessions would be launched at once. Suppose instead we moved it to the job queue as is; then every 85 seconds it would be executed in all existing data areas, now sequentially rather than simultaneously. When the measurements were checked, it turned out that this job, which by itself ran in, say, 100 milliseconds, consumed in total more than 100 percent of one core per day, although it seemed to be doing nothing.

So in SSL a modification flag was introduced, which is set whenever any object that may require full-text indexing is changed. And there is a single job executed every 85 seconds. It does not enter any areas; it simply looks at the flags. If the flag is set in some area, it schedules text extraction for that area.

This solves most such tasks. There are tasks that are not solved so easily, for example e-mail, where something more cunning has to be devised. E-mail must not only send letters but also receive them, which is much worse: messages come from external addresses, and nobody there will set the modification flag for us.

And one more reminder: it is forbidden to create predefined shared scheduled jobs. This will kill the server.

Work within platform architecture

Here we will talk about why adherence to the principles laid down in the platform architecture is also important for client-server applications.

Restrictions on the use of external resources

The use of external resources is potentially dangerous. Release 8.3 introduces the concept of cluster security profiles, which are assigned directly in the cluster for a specific infobase. With them, access to all external resources can be disabled: server file system calls, launching COM objects, using 1С:Enterprise external components, running external data processors and reports, launching applications installed on the server, and accessing Internet resources.

If a configuration needs some external resources, then, firstly, this must be clearly stated in the documentation and, secondly, the calls to them must be localized. Take the file system: you cannot just put your files somewhere on drive C: in the Temp folder, only in a specially permitted server directory. In practice this used to happen; for example, we would find some strange files in the technological log.

Work with server file system

Another point described in the standards but constantly forgotten: a cluster is called a cluster because it may contain multiple servers, and not for any other reason. If our cluster has multiple servers, as at http://accountingsuite.com/, for example, it is a bad idea to try to keep a file between client-server calls: with high probability the next call will not find it, because it may arrive at a different server.
So, if some data needs to be kept between server calls, it should be placed in temporary storage. Even a temporary file we saved stays on the server where that particular call was executed; if the next call is executed on another server, the file obviously will not be there, since that is physically a different computer.

Support of work with time zones

And finally, a small reminder about time zones. It is not recommended to use the current date, because the function CurrentDate() called on the server returns the current date of the server, located, for example, in Moscow. It usually has nothing to do with a client working, say, in Irkutsk. The current client date is also quite a dubious thing, because the client computer may have an invalid time or even an invalid date.

Therefore, it is better to use the function CurrentSessionDate(). On the server it returns the date in the time zone of the session, that is, of the data area.
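In server code the difference looks like this (the variable name is hypothetical):

```bsl
// Server code: use the session date, which respects the session time zone
DocumentDate = CurrentSessionDate();
// Not CurrentDate(): that would return the clock of the server itself,
// e.g. Moscow time, regardless of where the user actually works
```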

If this logic is not enough, you can implement your own in the application solution. We have not yet seen this in practice, but the session time zone can be switched if desired.

Of course, in some cases the current date of the client computer can be used, but it must be something very specific, like a reminder tied directly to that computer.
