From unjc.email at gmail.com Wed Jul 3 21:14:47 2013
From: unjc.email at gmail.com (unjc email)
Date: Wed, 3 Jul 2013 14:14:47 -0700
Subject: Persistent Connections
Message-ID: 

Hello there,

I have a question about enabling persistent connections in a workload. The following is how I set up the server and robot in the pg file; as shown, pconn_use_lmt is set to 1000.

Server S1 = {
    kind = "S101";
    contents = [ JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent: 13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther ];
    direct_access = contents;
    addresses = [ '25.57.0.10:9090', '25.57.0.11:9090' ]; // where to create these server agents
    http_versions = [ "1.0" ]; // newer agents use HTTP/1.1 by default
    pconn_use_lmt = const(1000); // Persistent connections - should tune this value
};

// Note that this Robot has an undefined request-rate in order to enable a
// best-effort workload.
Robot R = {
    kind = "R101";
    pop_model = { pop_distr = popUnif(); };
    recurrence = 50%;
    req_rate = undef();
    origins = S1.addresses; // where the origin servers are
    addresses = robotAddrs(authAddrScheme, theBench);
    pconn_use_lmt = const(1000); // Persistent connections - should tune this value
    open_conn_lmt = 1; // maximum concurrent connections
    http_versions = [ "1.0" ]; // newer agents use HTTP/1.1 by default
};

I examined the TCP streams in the tcpdump output from the client machine in a single-robot test; although "Connection: keep-alive" is found in both request and response headers, I see the client issue a [FIN, ACK] every few (<10) requests, well before the 1000-request limit is reached.
GET /w1b7335ec.2b642c95:00000008/t03/_0000413f.jpg HTTP/1.0
Accept: */*
Host: 25.57.0.10:9090
X-Xact: 1b7335ec.2b642c95:00000002 1b7335ec.2b642c95:00020522 0
X-Loc-World: 1b7335ec.2b642c95:00000008 -1/16703 8351
X-Rem-World: 1b7335ec.2b642c95:00000008 -1/16703 8351
X-Target: 25.57.0.10:9090
X-Abort: -324104509 -1205953971
X-Phase-Sync-Pos: 0
Connection: keep-alive

HTTP/1.0 200 OK
Cache-Control: private,no-cache
Pragma: no-cache
Date: Wed, 03 Jul 2013 19:50:22 GMT
Connection: keep-alive
Content-Length: 9479
Content-Type: image/jpeg
X-Target: 25.57.0.10:9090
X-Xact: 1b7335e7.5e615116:00000002 1b7335ec.2b642c95:7ffdfadd 0
X-Rem-World: 1b7335ef.4e10652d:00000008 -1/15998 7999
X-Abort: 2013317368 2072661844
X-Phase-Sync-Pos: 0

I also found that the robot machine runs out of ephemeral ports shortly after the start of the single-robot test. The ulimit value of the machine is 65536. I am surprised to see this if persistent connections are being used. FYI, this is a non-proxy test.

003.04| EphPortMgr.cc:23: error: 4096/8191 (s98) Address already in use
003.04| OS probably ran out of ephemeral ports at 25.57.100.2:0
003.04| Client.cc:347: error: 4096/8192 (c63) failed to establish a connection
003.04| 25.57.100.2 failed to connect to 25.57.0.11:9090

Would you please kindly advise what I might have configured wrong?

Thanks,
Jacky
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmitry.kurochkin at measurement-factory.com Wed Jul 3 22:29:21 2013
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Thu, 04 Jul 2013 02:29:21 +0400
Subject: Persistent Connections
In-Reply-To: 
References: 
Message-ID: <87obajs2f2.fsf@gmail.com>

Hi Jacky.

unjc email writes:

> Hello there,
>
> I have a question about enabling persistent connections in workload. The
> following is how I setup the server and robot in the pg file; as shown,
> pconn_use_lmt is set to 1000.
> > > Server S1 = { > kind = "S101"; > contents = [ JpgContent: 73.73%, HtmlContent: 11.45%, SwfContent: > 13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther ]; > direct_access = contents; > addresses = [ '25.57.0.10:9090', '25.57.0.11:9090' ]; // where to > create these server agents > http_versions = [ "1.0" ]; // newer agents use HTTP/1.1 by default > pconn_use_lmt = const(1000); // Persistent connections - should tune > this value > }; > > > // Note that this Robot has an undefined request-rate in order to enable a > // best-effort workload. > Robot R = { > > kind = "R101"; > pop_model = { pop_distr = popUnif(); }; > recurrence = 50%; > req_rate = undef(); > origins = S1.addresses; // where the origin servers are > > addresses = robotAddrs(authAddrScheme, theBench); > pconn_use_lmt = const(1000); // Persistent connections - should tune > this value > open_conn_lmt = 1; // maximum concurrent connections > http_versions = [ "1.0" ]; // newer agents use HTTP/1.1 by default > }; > > > I examine the tcp streams of tcpdump output from the client machine, in a > single-robot test; although "connection: keep-alive" are found in both > request and respond headers, I see client issue [FIN, ACK]'s every few > (<10) requests, that is way before 1000 requests they make. 
>
>
> GET /w1b7335ec.2b642c95:00000008/t03/_0000413f.jpg HTTP/1.0
> Accept: */*
> Host: 25.57.0.10:9090
> X-Xact: 1b7335ec.2b642c95:00000002 1b7335ec.2b642c95:00020522 0
> X-Loc-World: 1b7335ec.2b642c95:00000008 -1/16703 8351
> X-Rem-World: 1b7335ec.2b642c95:00000008 -1/16703 8351
> X-Target: 25.57.0.10:9090
> X-Abort: -324104509 -1205953971
> X-Phase-Sync-Pos: 0
> Connection: keep-alive
>
> HTTP/1.0 200 OK
> Cache-Control: private,no-cache
> Pragma: no-cache
> Date: Wed, 03 Jul 2013 19:50:22 GMT
> Connection: keep-alive
> Content-Length: 9479
> Content-Type: image/jpeg
> X-Target: 25.57.0.10:9090
> X-Xact: 1b7335e7.5e615116:00000002 1b7335ec.2b642c95:7ffdfadd 0
> X-Rem-World: 1b7335ef.4e10652d:00000008 -1/15998 7999
> X-Abort: 2013317368 2072661844
> X-Phase-Sync-Pos: 0
>

You set the maximum number of open connections per Robot to 1. This means that every time a Robot needs to make a request to a server different from the one used for the previous request, it has to close the existing idle persistent connection. The persistent connection can only be reused if the next request happens to target the same server as the previous one. Otherwise, the existing connection has to be closed to honor the open_conn_lmt setting.

This would work if there were just one origin server or if a proxy were used. For two servers and no proxy, you should set open_conn_lmt to at least 2. Or just leave it unset; a Robot should not use more than 2 connections anyway.

>
>
> I also found the robot machine runs out of ephemeral ports shortly after
> the start of the single-robot test. The ulimit value of the machine
> is 65536. I am surprised to see this if the persistent connections are
> being used. FYI, this is non-proxy test.
>

As described above, persistent connections are effectively disabled (i.e. rarely reused) in your workload. Clients have to close connections and open new ones at a high rate. Hence ephemeral ports run out.
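
If it helps, here is a sketch of the corrected Robot (copied from your workload and untested; only the open_conn_lmt line changes):

Robot R = {
    kind = "R101";
    pop_model = { pop_distr = popUnif(); };
    recurrence = 50%;
    req_rate = undef();
    origins = S1.addresses; // where the origin servers are
    addresses = robotAddrs(authAddrScheme, theBench);
    pconn_use_lmt = const(1000); // reuse each persistent connection for up to 1000 requests
    open_conn_lmt = 2; // one connection per origin server, so an idle pconn can be kept
    http_versions = [ "1.0" ]; // newer agents use HTTP/1.1 by default
};

With two origin servers, a limit of 2 lets the Robot keep an idle persistent connection to one server while talking to the other; simply omitting open_conn_lmt should work as well.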
> 003.04| EphPortMgr.cc:23: error: 4096/8191 (s98) Address already in use
> 003.04| OS probably ran out of ephemeral ports at 25.57.100.2:0
> 003.04| Client.cc:347: error: 4096/8192 (c63) failed to establish a
> connection
> 003.04| 25.57.100.2 failed to connect to 25.57.0.11:9090
>
>
> Would you please kindly advise what I might configure wrong?
>

I hope the above helps.

Regards,
Dmitry

> Thanks,
> Jacky
> _______________________________________________
> Users mailing list
> Users at web-polygraph.org
> http://www.web-polygraph.org/mailman/listinfo/users

From unjc.email at gmail.com Thu Jul 4 13:33:59 2013
From: unjc.email at gmail.com (unjc email)
Date: Thu, 4 Jul 2013 06:33:59 -0700
Subject: Persistent Connections
In-Reply-To: <87obajs2f2.fsf@gmail.com>
References: <87obajs2f2.fsf@gmail.com>
Message-ID: 

It works great after I commented out "open_conn_lmt". Thanks a lot, Dmitry.

Cheers,
Jacky

On Wed, Jul 3, 2013 at 3:29 PM, Dmitry Kurochkin <dmitry.kurochkin at measurement-factory.com> wrote:

> Hi Jacky.
>
> unjc email writes:
>
> > Hello there,
> >
> > I have a question about enabling persistent connections in workload. The
> > following is how I setup the server and robot in the pg file; as shown,
> > pconn_use_lmt is set to 1000.
> >
> >
> > Server S1 = {
> > kind = "S101";
> > contents = [ JpgContent: 73.73%, HtmlContent: 11.45%,
> SwfContent:
> > 13.05%, FlvContent: 0.06%, Mp3Content: 0.01%, cntOther ];
> > direct_access = contents;
> > addresses = [ '25.57.0.10:9090', '25.57.0.11:9090' ]; // where to
> > create these server agents
> > http_versions = [ "1.0" ]; // newer agents use HTTP/1.1 by default
> > pconn_use_lmt = const(1000); // Persistent connections - should tune
> > this value
> > };
> >
> >
> > // Note that this Robot has an undefined request-rate in order to enable
> a
> > // best-effort workload.
> > Robot R = { > > > > kind = "R101"; > > pop_model = { pop_distr = popUnif(); }; > > recurrence = 50%; > > req_rate = undef(); > > origins = S1.addresses; // where the origin servers are > > > > addresses = robotAddrs(authAddrScheme, theBench); > > pconn_use_lmt = const(1000); // Persistent connections - should tune > > this value > > open_conn_lmt = 1; // maximum concurrent connections > > http_versions = [ "1.0" ]; // newer agents use HTTP/1.1 by default > > }; > > > > > > I examine the tcp streams of tcpdump output from the client machine, in a > > single-robot test; although "connection: keep-alive" are found in both > > request and respond headers, I see client issue [FIN, ACK]'s every few > > (<10) requests, that is way before 1000 requests they make. > > > > > > GET /w1b7335ec.2b642c95:00000008/t03/_0000413f.jpg HTTP/1.0 > > Accept: */* > > Host: 25.57.0.10:9090 > > X-Xact: 1b7335ec.2b642c95:00000002 1b7335ec.2b642c95:00020522 0 > > X-Loc-World: 1b7335ec.2b642c95:00000008 -1/16703 8351 > > X-Rem-World: 1b7335ec.2b642c95:00000008 -1/16703 8351 > > X-Target: 25.57.0.10:9090 > > X-Abort: -324104509 -1205953971 > > X-Phase-Sync-Pos: 0 > > Connection: keep-alive > > > > HTTP/1.0 200 OK > > Cache-Control: private,no-cache > > Pragma: no-cache > > Date: Wed, 03 Jul 2013 19:50:22 GMT > > Connection: keep-alive > > Content-Length: 9479 > > Content-Type: image/jpeg > > X-Target: 25.57.0.10:9090 > > X-Xact: 1b7335e7.5e615116:00000002 1b7335ec.2b642c95:7ffdfadd 0 > > X-Rem-World: 1b7335ef.4e10652d:00000008 -1/15998 7999 > > X-Abort: 2013317368 2072661844 > > X-Phase-Sync-Pos: 0 > > > > You set the maximum number of open connections per Robot to 1. This > means that every time a Robot needs to make a request to a server > different from the previous request, it has to close the existing > idle persistent connection. The persistent connection can only be > reused if the next request happens to target the same server as the > previous one. 
Otherwise the existing connection has to be closed to
> honor the open_conn_lmt setting.
>
> This would work in case there is just one origin server or a proxy is
> used. For two servers and no proxy you should set open_conn_lmt to 2 at
> least. Or just leave it unset, a Robot should not use more than 2
> connections anyway.
>
> > I also found the robot machine runs out of ephemeral ports shortly after
> > the start of the single-robot test. The ulimit value of the machine
> > is 65536. I am surprised to see this if the persistent connections are
> > being used. FYI, this is non-proxy test.
> >
> As described above, persistent connections are effectively disabled
> (i.e. rarely reused) in your workload. Clients have to close
> connections and open new ones at a high rate. Hence ephemeral ports run
> out.
>
> > 003.04| EphPortMgr.cc:23: error: 4096/8191 (s98) Address already in use
> > 003.04| OS probably ran out of ephemeral ports at 25.57.100.2:0
> > 003.04| Client.cc:347: error: 4096/8192 (c63) failed to establish a
> > connection
> > 003.04| 25.57.100.2 failed to connect to 25.57.0.11:9090
> >
> > Would you please kindly advise what I might configure wrong?
> >
> I hope the above helps.
>
> Regards,
> Dmitry
>
> > Thanks,
> > Jacky
> > _______________________________________________
> > Users mailing list
> > Users at web-polygraph.org
> > http://www.web-polygraph.org/mailman/listinfo/users

From unjc.email at gmail.com Thu Jul 4 14:48:33 2013
From: unjc.email at gmail.com (unjc email)
Date: Thu, 4 Jul 2013 07:48:33 -0700
Subject: Multipart/form-data POST Request
Message-ID: 

Hello,

I wonder if Web Polygraph supports multipart/form-data POST requests. If so, is there an example showing how the workload is configured?

Thanks,
Jacky
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmitry.kurochkin at measurement-factory.com Sat Jul 6 18:44:36 2013
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Sat, 06 Jul 2013 22:44:36 +0400
Subject: Multipart/form-data POST Request
In-Reply-To: 
References: 
Message-ID: <87d2qvsf3f.fsf@gmail.com>

Hi Jacky.

unjc email writes:

> Hello,
>
> I wonder if Webpolygraph support multipart/form-data POST requests. If so,
> is there an example showing how the workload is configured?
>

Polygraph supports POST requests, and you can configure the request Content-Type and body properties using the PGL Content type (same as for replies). For details, please read the HTTP POST/PUT request bodies user manual at [1].

As for multipart/form-data in requests, you can configure the Content-Type request header using the Content.mime PGL field as described above. Polygraph does not support generation of a multipart/form-data body, and usually proxies do not care about body content. Still, if you need to generate multipart/form-data (or any other) body content, you should use the CDB feature. Please see the Realistic content simulation user manual [2] for more info.

Regards,
Dmitry

[1] http://www.web-polygraph.org/docs/userman/req_bodies.html
[2] http://www.web-polygraph.org/docs/userman/csm/

> Thanks,
> Jacky
> _______________________________________________
> Users mailing list
> Users at web-polygraph.org
> http://www.web-polygraph.org/mailman/listinfo/users

From unjc.email at gmail.com Thu Jul 11 17:35:37 2013
From: unjc.email at gmail.com (unjc email)
Date: Thu, 11 Jul 2013 13:35:37 -0400
Subject: Multipart/form-data POST Request
In-Reply-To: <87d2qvsf3f.fsf@gmail.com>
References: <87d2qvsf3f.fsf@gmail.com>
Message-ID: 

Thanks, Dmitry. I have spent some time massaging the request-body files and putting them in a cdb. Things work pretty well with the mime content-type and CSM.

Thanks,
Jacky

On Sat, Jul 6, 2013 at 2:44 PM, Dmitry Kurochkin <dmitry.kurochkin at measurement-factory.com> wrote:

> Hi Jacky.
> > unjc email writes:
> >
> > > Hello,
> > >
> > > I wonder if Webpolygraph support multipart/form-data POST requests. If
> so,
> > > is there an example showing how the workload is configured?
> > >
> >
> > Polygraph supports POST requests and you can configure request
> > Content-Type and body properties using the PGL Content type (same as for
> > replies). For details please read HTTP POST/PUT request bodies user
> > manual at [1].
> >
> > As for multipart/form-data in requests, you can configure Content-Type
> > request header using Content.mime PGL field as described above.
> > Polygraph does not support generation of multipart/form-data body. And
> > usually proxies do not care about body content. Still, if you need to
> > generate multipart/form-data (or any other) body content, you should use
> > CDB feature. Please see Realistic content simulation user manual [2]
> > for more info.
> >
> > Regards,
> > Dmitry
> >
> > [1] http://www.web-polygraph.org/docs/userman/req_bodies.html
> > [2] http://www.web-polygraph.org/docs/userman/csm/
> >
> > > Thanks,
> > > Jacky
> > > _______________________________________________
> > > Users mailing list
> > > Users at web-polygraph.org
> > > http://www.web-polygraph.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From unjc.email at gmail.com Fri Jul 12 14:33:22 2013
From: unjc.email at gmail.com (unjc email)
Date: Fri, 12 Jul 2013 10:33:22 -0400
Subject: GET Request with Query String
Message-ID: 

Hello,

Does Web Polygraph support GET requests with a query string, something like "/w1b7ec234.08157e44:00000008/t03/_0000002b.jpg?name=abc"?

Thanks,
Jacky
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmitry.kurochkin at measurement-factory.com Fri Jul 12 14:45:40 2013
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Fri, 12 Jul 2013 18:45:40 +0400
Subject: GET Request with Query String
In-Reply-To: 
References: 
Message-ID: <87li5brg4r.fsf@gmail.com>

Hi Jacky.
unjc email writes:

> Hello,
>
> Does Webpolygraph support GET request with query-string, something like
> "/w1b7ec234.08157e44:00000008/t03/_0000002b.jpg?name=abc"?
>

You can use foreign traces to produce URLs with queries. Please see the Trace replay user manual at [1] for details.

Regards,
Dmitry

[1] http://www.web-polygraph.org/docs/userman/replay.html

> Thanks,
> Jacky
> _______________________________________________
> Users mailing list
> Users at web-polygraph.org
> http://www.web-polygraph.org/mailman/listinfo/users

From unjc.email at gmail.com Fri Jul 12 14:57:22 2013
From: unjc.email at gmail.com (unjc email)
Date: Fri, 12 Jul 2013 10:57:22 -0400
Subject: GET Request with Query String
In-Reply-To: <87li5brg4r.fsf@gmail.com>
References: <87li5brg4r.fsf@gmail.com>
Message-ID: 

Thanks again, Dmitry.

Am I able to extract transaction stats by Robot (say I set up different robots for different types of traffic) from binary logs via ltrace? I believe the stats are merged and averaged in the console log, right?

Thanks,
Jacky

On Fri, Jul 12, 2013 at 10:45 AM, Dmitry Kurochkin <dmitry.kurochkin at measurement-factory.com> wrote:

> Hi Jacky.
>
> unjc email writes:
>
> > Hello,
> >
> > Does Webpolygraph support GET request with query-string, something like
> > "/w1b7ec234.08157e44:00000008/t03/_0000002b.jpg?name=abc"?
> >
>
> You can use foreign traces to produce URLs with queries. Please see
> Trace replay user manual at [1] for details.
> Regards,
> Dmitry
> [1] http://www.web-polygraph.org/docs/userman/replay.html
>
> > Thanks,
> > Jacky
> > _______________________________________________
> > Users mailing list
> > Users at web-polygraph.org
> > http://www.web-polygraph.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmitry.kurochkin at measurement-factory.com Fri Jul 12 15:41:37 2013
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Fri, 12 Jul 2013 19:41:37 +0400
Subject: GET Request with Query String
In-Reply-To: 
References: <87li5brg4r.fsf@gmail.com>
Message-ID: <87ip0frdji.fsf@gmail.com>

unjc email writes:

> Thanks again Dmitry.
>
> Am I able to extract transaction stats by Robot (say I setup different
> robots for different types of traffic) from binary logs via ltrace?

Stats from all Robots are merged (i.e. POST request stats contain data from all Robots running in a Polygraph process). Different Robot types may produce different non-overlapping stats though (e.g. one Robot may produce only GET requests while another one only POST; in this case, GET and POST stats would effectively represent a single Robot type).

As a workaround, you may run different Robot types in different Polygraph processes. Each process would produce a separate binary log with stats from the Robots that run in this process.

> I
> believe the stats are merged and averaged in the console log, right?
>

Right.

Regards,
Dmitry

> Thanks,
> Jacky
>
> On Fri, Jul 12, 2013 at 10:45 AM, Dmitry Kurochkin <
> dmitry.kurochkin at measurement-factory.com> wrote:
>
>> Hi Jacky.
>>
>> unjc email writes:
>>
>> > Hello,
>> >
>> > Does Webpolygraph support GET request with query-string, something like
>> > "/w1b7ec234.08157e44:00000008/t03/_0000002b.jpg?name=abc"?
>> >
>>
>> You can use foreign traces to produce URLs with queries. Please see
>> Trace replay user manual at [1] for details.
>> >> Regards, >> Dmitry >> >> [1] http://www.web-polygraph.org/docs/userman/replay.html >> >> > >> > >> > >> > Thanks, >> > Jacky >> > _______________________________________________ >> > Users mailing list >> > Users at web-polygraph.org >> > http://www.web-polygraph.org/mailman/listinfo/users >> From unjc.email at gmail.com Fri Jul 12 16:04:50 2013 From: unjc.email at gmail.com (unjc email) Date: Fri, 12 Jul 2013 12:04:50 -0400 Subject: GET Request with Query String In-Reply-To: <87ip0frdji.fsf@gmail.com> References: <87li5brg4r.fsf@gmail.com> <87ip0frdji.fsf@gmail.com> Message-ID: Thanks for your quick reply. I have thought of this workaround. The challenge is to keep the desired traffic ratio (e.g. 40% HTTP GET, 40% HTTP POST, 15% HTTPS GET, 5% HTTPS POST) in the ramping load test using best-effort robots. Thanks, Jacky On Fri, Jul 12, 2013 at 11:41 AM, Dmitry Kurochkin < dmitry.kurochkin at measurement-factory.com> wrote: > unjc email writes: > > > Thanks again Dmitry. > > > > Am I able to extract transaction stats by Robot (say I setup different > > robots for different types of traffic) from binary logs via ltrace? > > Stats from all Robots are merged (i.e. POST request stats contain data > from all Robots running in a Polygraph process). Different Robot types > may produce different non-overlapping stats though (e.g. one Robot may > produce only GET request, while another one only POST, in this case, GET > and POST stats would effectively represent a single Robot type). > > As a work around, you may run different Robot types in different > Polygraph processes. Each process would produce a separate binary log > with stats from Robots that run in this process. > > > I > > believe the stats are merged and averaged in the console log, right? > > > > Right. > > Regards, > Dmitry > > > > > > > > > Thanks, > > Jacky > > > > > > On Fri, Jul 12, 2013 at 10:45 AM, Dmitry Kurochkin < > > dmitry.kurochkin at measurement-factory.com> wrote: > > > >> Hi Jacky. 
> >> > >> unjc email writes: > >> > >> > Hello, > >> > > >> > Does Webpolygraph support GET request with query-string, something > like > >> > "/w1b7ec234.08157e44:00000008/t03/_0000002b.jpg?name=abc"? > >> > > >> > >> You can use foreign traces to produce URLs with queries. Please see > >> Trace replay user manual at [1] for details. > >> > >> Regards, > >> Dmitry > >> > >> [1] http://www.web-polygraph.org/docs/userman/replay.html > >> > >> > > >> > > >> > > >> > Thanks, > >> > Jacky > >> > _______________________________________________ > >> > Users mailing list > >> > Users at web-polygraph.org > >> > http://www.web-polygraph.org/mailman/listinfo/users > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rousskov at measurement-factory.com Sat Jul 13 02:53:38 2013 From: rousskov at measurement-factory.com (Alex Rousskov) Date: Fri, 12 Jul 2013 20:53:38 -0600 (MDT) Subject: GET Request with Query String In-Reply-To: <87li5brg4r.fsf@gmail.com> References: <87li5brg4r.fsf@gmail.com> Message-ID: On Fri, 12 Jul 2013, Dmitry Kurochkin wrote: > unjc email writes: >> >> Does Webpolygraph support GET request with query-string, something like >> "/w1b7ec234.08157e44:00000008/t03/_0000002b.jpg?name=abc"? > You can use foreign traces to produce URLs with queries. Please see > Trace replay user manual at [1] for details. 
> [1] http://www.web-polygraph.org/docs/userman/replay.html

Also, if you want query strings in Polygraph-generated URLs, I suspect it is possible to add them (or any other custom URL suffix) using the PGL Content::mime field and its "extensions" feature:

http://www.web-polygraph.org/docs/reference/pgl/types.html#type:docs/reference/pgl/types/Mime

In the following example, the feature is used to add ".html" and ".htm" suffixes, but you can also add ".jpg?foo=bar" or a similar suffix:

http://www.web-polygraph.org/docs/reference/models/traffic.html#_p_37

I have not tested that trick, but if it does not work, then it is a bug that we should fix.

HTH,

Alex.

From fry at open.ch Mon Jul 15 18:15:35 2013
From: fry at open.ch (Franck Youssef)
Date: Mon, 15 Jul 2013 20:15:35 +0200
Subject: Understanding the timestamp of ltrace
Message-ID: <8465_1373912141_51E43C4D_8465_5420_1_C37EB891-A210-47B4-A0AB-2F02EC8FFBB9@open.ch>

Hello,

I am trying to generate log traces of err_xact.count myself, using polygraph-ltrace --objects. I would like to sample these values with a sampling interval of 20 seconds.

To my understanding, that should be feasible with the --time_unit option. However, when running ltrace with different --time_unit values, I always obtain the same output. Furthermore, when printing the 'time' object using --time_unit 1s, I obtain some kind of a funny timestamp prefixed with a dash: "-2147483620.50" (= Fri Dec 13 21:46:19 MET 1901 using date -d @?).

What does that timestamp mean? How could I obtain a "natural" unix timestamp, and how could I modify the sampling interval?
Best,

Franck

--
franck youssef
junior engineer

open systems ag
raeffelstrasse 29
ch-8045 zurich
t: +41 58 100 10 10
f: +41 58 100 10 11
fry at open.ch
http://www.open.ch

From fry at open.ch Wed Jul 17 16:43:21 2013
From: fry at open.ch (Franck Youssef)
Date: Wed, 17 Jul 2013 18:43:21 +0200
Subject: Understanding the timestamp of ltrace
In-Reply-To: <51E6BF5B.8060708@tut.by>
References: <8465_1373912141_51E43C4D_8465_5420_1_C37EB891-A210-47B4-A0AB-2F02EC8FFBB9@open.ch> <51E6BF5B.8060708@tut.by>
Message-ID: <8465_1374079407_51E6C9AF_8465_8902_1_303E3A30-9BE1-4445-8DBB-CF11E936DCF8@open.ch>

Hi Pavel,

>> Furthermore, when printing the 'time' object using --time_unit 1s, I obtain some kind of a funny timestamp prefixed with a dash: "-2147483620.50" (= Fri Dec 13 21:46:19 MET 1901 using date -d @?).
>
> This seems like a bug. What polygraph version do you use?

I am using the latest public stable release (a.k.a. v. 4.3.2). Using polygraph-lr, the timestamps are correct. However, they are wrong with polygraph-ltrace.

> You could modify sampling interval using '--win_len' option. Do not use '--time_unit' option if you want unix timestamps.
> e.g.
> $ polygraph-ltrace --objects time, err_xact.count --win_len 20sec

References: <8465_1373912141_51E43C4D_8465_5420_1_C37EB891-A210-47B4-A0AB-2F02EC8FFBB9@open.ch>
Message-ID: <51E6BF5B.8060708@tut.by>

Hi Franck,

Please find some answers in-lined:

On 07/15/2013 09:15 PM, Franck Youssef wrote:
> Hello,
>
> I am trying to generate log traces of err_xact.count myself, using the polygraph-ltrace --objects.
>
> I would like to sample these values with a sampling interval of 20 seconds.
>
> To my understanding, that should be feasible with the --time_unit option. However, when running ltrace with different --time_unit values] I always obtain the same output.
> Furthermore, when printing the 'time' object using --time_unit 1s, I obtain some kind of a funny timestamp prefixed with a dash: "-2147483620.50" (= Fri Dec 13 21:46:19 MET 1901 using date -d @?).

This seems like a bug. What polygraph version do you use?

> What does that timestamp mean, how could I obtain a "natural" unix timestamp and how could I modify the sampling interval?

You could modify the sampling interval using the '--win_len' option. Do not use the '--time_unit' option if you want unix timestamps, e.g.

$ polygraph-ltrace --objects time, err_xact.count --win_len 20sec

Best,
> Franck
>

Best wishes,
Pavel

From panya_qwert at tut.by Wed Jul 17 17:24:15 2013
From: panya_qwert at tut.by (Pavel Kazlenka)
Date: Wed, 17 Jul 2013 20:24:15 +0300
Subject: Understanding the timestamp of ltrace
In-Reply-To: <8465_1374079407_51E6C9AF_8465_8902_1_303E3A30-9BE1-4445-8DBB-CF11E936DCF8@open.ch>
References: <8465_1373912141_51E43C4D_8465_5420_1_C37EB891-A210-47B4-A0AB-2F02EC8FFBB9@open.ch> <51E6BF5B.8060708@tut.by> <8465_1374079407_51E6C9AF_8465_8902_1_303E3A30-9BE1-4445-8DBB-CF11E936DCF8@open.ch>
Message-ID: <51E6D33F.9060201@tut.by>

Hi Franck,

Sorry for the obvious suggestion, but could you check the unix file permissions/owners of the log files? If you e.g. scp'ed the server logs to the client machine, there is a chance that the user running polygraph-ltrace has no permission to read the *.log files.

Best wishes,
Pavel

On 07/17/2013 07:43 PM, Franck Youssef wrote:
> Hi Pavel,
>
>>> Furthermore, when printing the 'time' object using --time_unit 1s, I obtain some kind of a funny timestamp prefixed with a dash: "-2147483620.50" (= Fri Dec 13 21:46:19 MET 1901 using date -d @?).
>> This seems like a bug. What polygraph version do you use?
> I am using the latest public stable release (a.k.a. v. 4.3.2).
> Using polygraph-lr, the timing is correct. However, they are false with polygraph-ltrace.
>
>> You could modify sampling interval using '--win_len' option.
Do not use '--time_unit' option if you want unix timestamps.
>> e.g.
>> $ polygraph-ltrace --objects time, err_xact.count --win_len 20sec
Thanks! This works like a charm!
>
> Furthermore, I experience issues when inspecting logs from multiple client and server hosts.
> When running
> $ polygraph-ltrace --object err_xact.count --side all *.log
>
> I obtain the following errors:
> server1.log:warning: failed to read log file, skipping
> server2.log:warning: failed to read log file, skipping
> ... for all the server logs
> followed by the "normal" ltrace output based on the client logs only.
>
> When running the same command on the server logs only, the trace is correct and no warnings are issued. It also works correctly with client logs only.
> However, when starting to mix client and server logs together, I again obtain warnings on STDERR.
>
> The error happens on any --side all|clt|srv and --sync_times 0|1 combination.
> Also possibly related, when using polygraph-reporter on the same logs, the polygraph-reporter app gets killed after passing to server-side plots. The crash does not happen with client logs only.
>
> Is that also a bug, or am I misunderstanding the logging mechanism?
>
> Thank you a lot for your explanations.
>
> Cheers,
>
> Franck

From dmitry.kurochkin at measurement-factory.com Wed Jul 17 22:29:59 2013
From: dmitry.kurochkin at measurement-factory.com (Dmitry Kurochkin)
Date: Thu, 18 Jul 2013 02:29:59 +0400
Subject: Understanding the timestamp of ltrace
In-Reply-To: <51E6D33F.9060201@tut.by>
References: <8465_1373912141_51E43C4D_8465_5420_1_C37EB891-A210-47B4-A0AB-2F02EC8FFBB9@open.ch> <51E6BF5B.8060708@tut.by> <8465_1374079407_51E6C9AF_8465_8902_1_303E3A30-9BE1-4445-8DBB-CF11E936DCF8@open.ch> <51E6D33F.9060201@tut.by>
Message-ID: <87a9lkrfa0.fsf@gmail.com>

Hi Franck, Pavel.

Pavel Kazlenka writes:

> Hi Franck,
>
> Sorry for obvious action, but could you check unix file
> permissions/owners for log files? If you e.g.
scp'ed server logs to > client machine there is a chance that user running polygraph-ltrace > really have no permission to read *.log file. > It does not seem like a permission problem since the same command works for server side only logs. > Best wishes, > Pavel > > On 07/17/2013 07:43 PM, Franck Youssef wrote: >> Hi Pavel, >> >>>> Furthermore, when printing the 'time' object using --time_unit 1s, I obtain some kind of a funny timestamp prefixed with a dash: "-2147483620.50" (= Fri Dec 13 21:46:19 MET 1901 using date -d @?). >>> This seems like a bug. What polygraph version do you use? >> I am using the latest public stable release (a.k.a. v. 4.3.2). >> Using polygraph-lr, the timing is correct. However, they are false with polygraph-ltrace. >> >>> You could modify sampling interval using '--win_len' option. Do not use '--time_unit' option if you want unix timestamps. >>> e.g. >>> $ polygraph-ltrace --objects time, err_xact.count --win_len 20sec > Thanks! This works like a charm! >> >> Furthermore, I experience issues when inspecting logs from multiple clients and servers hosts. >> When running >> $ polygraph-ltrace --object err_xact.count --side all *.log >> >> I obtain the following errors: >> server1.log:warning: failed to read log file, skipping >> server2.log:warning: failed to read log file, skipping >> ? for all the servers logs >> followed by the "normal" ltrace output based on the clients logs only. >> >> Whey running the same command on the server logs only, the trace is correct and no warnings are issued. It works also correctly with client logs only. >> However, when starting to mix client and server logs together, I obtain again warnings on STDERR. >> >> The error happens on any --side all|clt|srv and --sync_times 0|1 combination. >> Also possibly related, when using polygraph-reporter on the same logs, the polygraph-reporter app gets killed after passing to server-side plots. The crash does not happen with client logs only. 
>>
>> Is that also a bug, or am I misunderstanding the logging mechanism?
>>

This is definitely a bug; the reporter should never crash. Do all client
and server hosts use the same workload? Can you please provide a
backtrace for the reporter crash (dump core, run the "gdb
polygraph-reporter core" command, then run the "bt" command)? Or even
better, would it be possible for you to give us the binary logs
(privately)? That would make it much easier for us to triage the bug.

Regards,
Dmitry

>>
>> Thank you a lot for your explanations.
>>
>> Cheers,
>>
>> Franck
>
> _______________________________________________
> Users mailing list
> Users at web-polygraph.org
> http://www.web-polygraph.org/mailman/listinfo/users

From panya_qwert at tut.by  Thu Jul 18 07:52:23 2013
From: panya_qwert at tut.by (Pavel Kazlenka)
Date: Thu, 18 Jul 2013 10:52:23 +0300
Subject: Understanding the timestamp of ltrace
In-Reply-To: <87a9lkrfa0.fsf@gmail.com>
References: <8465_1373912141_51E43C4D_8465_5420_1_C37EB891-A210-47B4-A0AB-2F02EC8FFBB9@open.ch>
	<51E6BF5B.8060708@tut.by>
	<8465_1374079407_51E6C9AF_8465_8902_1_303E3A30-9BE1-4445-8DBB-CF11E936DCF8@open.ch>
	<51E6D33F.9060201@tut.by>
	<87a9lkrfa0.fsf@gmail.com>
Message-ID: <51E79EB7.7050606@tut.by>

Hi,

On 07/18/2013 01:29 AM, Dmitry Kurochkin wrote:
> Hi Frank, Pavel.
>
> Pavel Kazlenka writes:
>
>> Hi Franck,
>>
>> Sorry for suggesting the obvious, but could you check the unix file
>> permissions/owners of the log files? If you e.g. scp'ed server logs to
>> the client machine, there is a chance that the user running
>> polygraph-ltrace really has no permission to read the *.log files.
>>
> It does not seem like a permission problem since the same command works
> for server-side-only logs.

Agree. I performed several tests. It seems that polygraph-ltrace is
unable to process server and client logs in the same run.
Modern polygraph versions print something like this:

polygraph/bin/polygraph-ltrace --side all --objects time,ok_xact.count last/*.log
warning: log(s) have info from client and server sides, and no specific side was specified; assuming `clt' side
last/srv.1.log:warning: failed to read log file, skipping

So either '--side all' does not work correctly (if '--side all' is
designed for logs that each contain info from both sides, rather than for
a mix of logs that each contain info from a single side), or
polygraph-ltrace as a whole works incorrectly by assuming that all input
logs come from one side. Anyway, this is a minor bug.

>> Best wishes,
>> Pavel
>>
>> On 07/17/2013 07:43 PM, Franck Youssef wrote:
>>> Hi Pavel,
>>>
>>>>> Furthermore, when printing the 'time' object using --time_unit 1s,
>>>>> I obtain some kind of a funny timestamp prefixed with a dash:
>>>>> "-2147483620.50" (= Fri Dec 13 21:46:19 MET 1901 using date -d @...).
>>>> This seems like a bug. What polygraph version do you use?
>>> I am using the latest public stable release (a.k.a. v. 4.3.2).
>>> Using polygraph-lr, the timings are correct. However, they are wrong
>>> with polygraph-ltrace.
>>>
>>>> You could modify the sampling interval using the '--win_len' option.
>>>> Do not use the '--time_unit' option if you want unix timestamps,
>>>> e.g.
>>>> $ polygraph-ltrace --objects time,err_xact.count --win_len 20sec
>> Thanks! This works like a charm!
>>>
>>> Furthermore, I experience issues when inspecting logs from multiple
>>> client and server hosts. When running
>>>
>>> $ polygraph-ltrace --object err_xact.count --side all *.log
>>>
>>> I obtain the following errors:
>>>
>>> server1.log:warning: failed to read log file, skipping
>>> server2.log:warning: failed to read log file, skipping
>>> ... for all the server logs,
>>>
>>> followed by the "normal" ltrace output based on the client logs only.
>>>
>>> When running the same command on the server logs only, the trace is
>>> correct and no warnings are issued. It also works correctly with
>>> client logs only.
>>> However, when starting to mix client and server logs together, I
>>> again obtain warnings on STDERR.
>>>
>>> The error happens with any --side all|clt|srv and --sync_times 0|1
>>> combination. Also possibly related: when using polygraph-reporter on
>>> the same logs, the polygraph-reporter app gets killed after passing
>>> to the server-side plots. The crash does not happen with client logs
>>> only.
>>>
>>> Is that also a bug, or am I misunderstanding the logging mechanism?
>>>
> This is definitely a bug; the reporter should never crash. Do all
> client and server hosts use the same workload? Can you please provide a
> backtrace for the reporter crash (dump core, run the "gdb
> polygraph-reporter core" command, then run the "bt" command)? Or even
> better, would it be possible for you to give us the binary logs
> (privately)? That would make it much easier for us to triage the bug.
>
> Regards,
> Dmitry
>
>>> Thank you a lot for your explanations.
>>>
>>> Cheers,
>>>
>>> Franck
>> _______________________________________________
>> Users mailing list
>> Users at web-polygraph.org
>> http://www.web-polygraph.org/mailman/listinfo/users

From jjk_saji at yahoo.com  Wed Jul 24 07:55:25 2013
From: jjk_saji at yahoo.com (John Joseph)
Date: Wed, 24 Jul 2013 00:55:25 -0700 (PDT)
Subject: Hi , From new user
Message-ID: <1374652525.77137.YahooMailNeo@web160905.mail.bf1.yahoo.com>

Hi

Thanks for the contribution of the software. I recently heard about
web-polygraph when I was trying to find out how to test a squid server.
I need to test my squid cache installation, and I found out from google
that I could do it using web-polygraph. I downloaded the latest stable
release and tried to install it on 64-bit ubuntu; I was able to run the
configure command, but my make command failed.
After a search I found out that web-polygraph works fine with FreeBSD, so
I plan to try it on FreeBSD.

Now I am searching for a document on how to start testing with
web-polygraph (the basics). I would like to know: do I need to install it
on the squid server, or can I have it on my FreeBSD client and check the
server from the client?

I am currently referring to the docs at
"http://www.web-polygraph.org/docs/userman/start.html"; is there any
simple how-to guide?

Guidance and advice requested.

thanks
Joseph John

From jjk_saji at yahoo.com  Wed Jul 24 08:55:14 2013
From: jjk_saji at yahoo.com (John Joseph)
Date: Wed, 24 Jul 2013 01:55:14 -0700 (PDT)
Subject: polysrv and polyclt not there after installing
Message-ID: <1374656114.20666.YahooMailNeo@web160906.mail.bf1.yahoo.com>

Hi All

I am a novice with web-polygraph. I have installed the latest stable
release on FreeBSD. Now, following the docs at
"http://www.web-polygraph.org/docs/userman/simple.html", I am trying to
explore, but I was not able to see the commands polysrv and polyclt.
I have other commands such as

polygraph-aka         polygraph-lx          polygraph-polyprobe
polygraph-beepmon     polygraph-pgl-test    polygraph-polyrrd
polygraph-cdb         polygraph-pgl2acl     polygraph-pop-test
polygraph-client      polygraph-pgl2eng     polygraph-reporter
polygraph-cmp-lx      polygraph-pgl2ips     polygraph-rng-test
polygraph-distr-test  polygraph-pgl2ldif    polygraph-server
polygraph-dns-cfg     polygraph-pmix2-ips   polygraph-udp2tcpd
polygraph-lr          polygraph-pmix3-ips   polygraph-webaxe4-ips
polygraph-ltrace      polygraph-polymon

in the system, but I am not able to see
polysrv and polyclt.

From rousskov at measurement-factory.com  Wed Jul 24 14:59:02 2013
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Wed, 24 Jul 2013 08:59:02 -0600
Subject: polysrv and polyclt not there after installing
In-Reply-To: <1374656114.20666.YahooMailNeo@web160906.mail.bf1.yahoo.com>
References: <1374656114.20666.YahooMailNeo@web160906.mail.bf1.yahoo.com>
Message-ID: <51EFEBB6.8060100@measurement-factory.com>

On 07/24/2013 02:55 AM, John Joseph wrote:
> I am a novice with web-polygraph. I have installed the latest stable
> release on FreeBSD. Now, following the docs at
> "http://www.web-polygraph.org/docs/userman/simple.html", I am trying to
> explore, but I was not able to see the commands polysrv and polyclt.
> I have other commands such as
>
> polygraph-aka         polygraph-lx          polygraph-polyprobe
> polygraph-beepmon     polygraph-pgl-test    polygraph-polyrrd
> polygraph-cdb         polygraph-pgl2acl     polygraph-pop-test
> polygraph-client      polygraph-pgl2eng     polygraph-reporter
> polygraph-cmp-lx      polygraph-pgl2ips     polygraph-rng-test
> polygraph-distr-test  polygraph-pgl2ldif    polygraph-server
> polygraph-dns-cfg     polygraph-pmix2-ips   polygraph-udp2tcpd
> polygraph-lr          polygraph-pmix3-ips   polygraph-webaxe4-ips
> polygraph-ltrace      polygraph-polymon
>
> in the system, but I am not able to see polysrv and polyclt.

Newer releases use binary names that follow packaging rules for various
OSes, such as Debian. Polyclt became polygraph-client; polysrv became
polygraph-server; etc. We will update the web site to reflect these
renaming changes. Meanwhile, you should be able to guess the right name
in most cases.

HTH,

Alex.
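[Editor's note: the following sketch is an illustration added to this archive, not part of the original thread. Given the renaming Alex describes, a script can probe for whichever binary name a given box actually has; the `resolve` helper is hypothetical, not a Polygraph tool.]

```shell
#!/bin/sh
# Hypothetical helper: print the first of the given command names that is
# installed, so scripts keep working across the polyclt -> polygraph-client
# style renames described above.
resolve() {
    for name in "$@"; do
        if command -v "$name" >/dev/null 2>&1; then
            printf '%s\n' "$name"
            return 0
        fi
    done
    return 1
}

# Prefer the new packaged names; fall back to the pre-rename short names.
CLT=$(resolve polygraph-client polyclt) || CLT="not installed"
SRV=$(resolve polygraph-server polysrv) || SRV="not installed"
echo "client binary: $CLT"
echo "server binary: $SRV"
```

On a box with only the new binaries installed, both lookups report the polygraph-* names; on a pre-rename installation, the short names win the fallback.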
From rousskov at measurement-factory.com  Wed Jul 24 15:08:55 2013
From: rousskov at measurement-factory.com (Alex Rousskov)
Date: Wed, 24 Jul 2013 09:08:55 -0600
Subject: Hi , From new user
In-Reply-To: <1374652525.77137.YahooMailNeo@web160905.mail.bf1.yahoo.com>
References: <1374652525.77137.YahooMailNeo@web160905.mail.bf1.yahoo.com>
Message-ID: <51EFEE07.6040709@measurement-factory.com>

On 07/24/2013 01:55 AM, John Joseph wrote:
> I recently heard about web-polygraph when I was trying to find out how
> to test a squid server. I need to test my squid cache installation, and
> I found out from google that I could do it using web-polygraph. I
> downloaded the latest stable release and tried to install it on 64-bit
> ubuntu; I was able to run the configure command, but my make command
> failed. After a search I found out that web-polygraph works fine with
> FreeBSD, so I plan to try it on FreeBSD.

Polygraph runs OK on FreeBSD, although most of the development (and use)
happens on Linux these days. If you come across a build problem, try
googling the error message for an existing solution, and report new
problems at https://bugs.launchpad.net/polygraph

> Now I am searching for a document on how to start testing with
> web-polygraph (the basics). I would like to know: do I need to install
> it on the squid server, or can I have it on my FreeBSD client and check
> the server from the client?

Ideally, you should use at least three boxes to test a proxy: a client
drone (running Polygraph robots), a DUT box (running your proxy), and a
server drone (running Polygraph servers). If you are short on resources,
you can cut corners at the expense of the quality of your test results.
In the extreme case, everything can run on the same box.

> I am currently referring to the docs at
> "http://www.web-polygraph.org/docs/userman/start.html"; is there any
> simple how-to guide?

http://www.web-polygraph.org/docs/userman/simple.html is the next step.
After that, the learning curve becomes steeper as you have to research
standard workloads such as PolyMix-4 (documented but rather complex)
and/or develop your own workloads (using documented PGL types and other
Polygraph concepts as building blocks). Reading old cache-off reports
(and studying standard workloads) may help you understand the overall
methodology better as well.

Good luck,

Alex.
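[Editor's note: for readers following Alex's pointer to the simple workload page, a minimal PGL workload in the style of the Server/Robot pair quoted earlier in this archive might look as follows. This is a sketch added to the archive, not a configuration from the thread; the content parameters, addresses, and request rate are illustrative assumptions.]

```
// A minimal proxy-test sketch modeled on the Server/Robot pair shown
// earlier in this thread. All addresses and rates are placeholders.

Content SimpleContent = {
	size = exp(13KB);        // exponentially distributed object sizes
	cachable = 80%;          // share of responses a proxy may cache
};

Server S = {
	kind = "S101";
	contents = [ SimpleContent ];
	direct_access = contents;
	addresses = [ '10.0.0.1:9090' ];  // where Polygraph servers listen
};

Robot R = {
	kind = "R101";
	pop_model = { pop_distr = popUnif(); };
	recurrence = 50%;                 // probability of revisiting an object
	req_rate = 1/sec;                 // per-robot request rate
	origins = S.addresses;            // where the origin servers are
	addresses = [ '10.0.0.2' ];       // client-side agent address
};

use(S, R);
```

Feed such a file to polygraph-server on the server drone and polygraph-client on the client drone, then grow it toward a standard workload like PolyMix-4 as the test matures.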