How to create separated address spaces when running multiple client processes
Pavel Kazlenka
pavel.kazlenka at measurement-factory.com
Thu May 8 08:05:42 UTC 2014
Hi Jacky,
You don't need separate workloads for each client process. Instead, you may:
1) In a workload common to all clients (and, possibly, the server), put
the addresses of all clients:
Bench B = {
    peak_req_rate = 300/sec;
    client_side = {
        max_agent_load = 1/sec; // estimated load produced by one Robot
        addr_space = [ 'lo::10.0.1-10.1-250' ];
        hosts = [ '172.16.0.1-3' ]; // three client-side hosts or partitions
    };
    server_side = { ... };
};
2) Run each client with a different --fake_hosts option, e.g.:
polygraph-client --config workload.pg --fake_hosts 10.0.1.1-5 <other options>
polygraph-client --config workload.pg --fake_hosts 10.0.1.6-10 <other options>
Please see the details of the fake_hosts option at
http://www.web-polygraph.org/docs/reference/options.html
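For example, a small wrapper script along these lines could start both
client processes on one box. This is only a minimal sketch built from the
two commands above; the address ranges come from your example, and any
other options (logging, console output, etc.) are omitted:

#!/bin/sh
# Sketch: start two polygraph-client processes on one host,
# each handling a distinct slice of the robot address range.
polygraph-client --config workload.pg --fake_hosts 10.0.1.1-5 &
polygraph-client --config workload.pg --fake_hosts 10.0.1.6-10 &
wait  # keep the shell alive until both clients exit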
Best wishes,
Pavel
On 05/05/2014 07:57 PM, unjc email wrote:
> Hello there,
>
> I ran into a CPU-bound issue on the Web Polygraph client machine. I
> want to run multiple client processes on that machine. How do I
> configure the robots so that each process gets its own address
> space?
>
> For example,
>
> Client1 Robots: 10.0.1.1 to 10.0.1.5
> Client2 Robots: 10.0.1.6 to 10.0.1.10
> ...
>
> Bench B = {
>     peak_req_rate = 300/sec;
>     client_side = {
>         max_agent_load = 1/sec; // estimated load produced by one Robot
>         addr_space = [ 'lo::10.0.1-5.1-250' ];
>         hosts = [ '172.16.0.1-3' ]; // three client-side hosts or partitions
>     };
>     server_side = { ... };
> };
>
>
>
> Do I have to prepare separate workloads (.pg files) for each client
> process on the same box?
>
>
> Thanks,
> Jacky
>
>