<div dir="ltr"><div dir="ltr"><div dir="ltr"><div>Hi Björn,</div><div><br></div><div>Thanks.</div><div><br></div><div>The two experiments use the same algorithm; what changes is the resource-sharing protocol. The workload is the same for both experiments.</div><div><br></div><div>My suspicion is that in one of the experiments data is being collected from only one of the processors, but I have not yet confirmed this. I think I noticed this at some point when running ft-dump for one of the protocols, but that was some time ago. As soon as possible, I will run ft-dump again for both cases.</div><div><br></div><div>I will also check the event codes used when saving and reading the overhead tracers; maybe something was changed by the precursor works.</div><div><br></div><div>There are no error messages when the tracers are recorded at the end of the experiments.</div><div><br></div><div>Best regards,</div><div><br></div><div>Ricardo</div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 13 May 2019 at 07:01, <<a href="mailto:litmus-dev-request@lists.litmus-rt.org">litmus-dev-request@lists.litmus-rt.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Send litmus-dev mailing list submissions to<br>
<a href="mailto:litmus-dev@lists.litmus-rt.org" target="_blank">litmus-dev@lists.litmus-rt.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="https://lists.litmus-rt.org/listinfo/litmus-dev" rel="noreferrer" target="_blank">https://lists.litmus-rt.org/listinfo/litmus-dev</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:litmus-dev-request@lists.litmus-rt.org" target="_blank">litmus-dev-request@lists.litmus-rt.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:litmus-dev-owner@lists.litmus-rt.org" target="_blank">litmus-dev-owner@lists.litmus-rt.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of litmus-dev digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: Schedule and Locking Overhead information (Björn Brandenburg)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Sun, 12 May 2019 12:25:28 +0200<br>
From: Björn Brandenburg <<a href="mailto:bbb@mpi-sws.org" target="_blank">bbb@mpi-sws.org</a>><br>
To: <a href="mailto:litmus-dev@lists.litmus-rt.org" target="_blank">litmus-dev@lists.litmus-rt.org</a><br>
Subject: Re: [LITMUS^RT] Schedule and Locking Overhead information<br>
Message-ID: <<a href="mailto:4BC7E4D9-76B5-446C-A4AA-5EE704826FF5@mpi-sws.org" target="_blank">4BC7E4D9-76B5-446C-A4AA-5EE704826FF5@mpi-sws.org</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
> On 11. May 2019, at 18:19, Ricardo Teixeira <<a href="mailto:ricardo.btxr@gmail.com" target="_blank">ricardo.btxr@gmail.com</a>> wrote:<br>
> <br>
> I executed 800 experiments for a stock (out-of-the-box) protocol; each experiment ran for 30 seconds. During the experiments, I collected overhead information using the scripts available on GitHub (through the Feather-Trace tools). <br>
> <br>
> I also ran the same set of experiments for another stock protocol and collected the same overhead information. Both protocols generate nearly the same number of jobs. <br>
> <br>
> Analysing the collected data, I realized that the second protocol generated about three times as many samples as the first, for all types of data collected, for example: CXS, LOCK, RELEASE-LATENCY, RELEASE, SCHED and SCHED2. <br>
<br>
Hi Ricardo,<br>
<br>
Some variation in the number of scheduling decisions and context switches can be expected, especially if you are comparing different scheduling and/or locking policies, but release-related events are determined solely by the workloads, so if you are running the same workload under both setups, it shouldn’t differ by a factor of three. <br>
<br>
> <br>
> Another problem was that both protocols generated a reasonable number of LOCK samples, but an insignificant number of UNLOCK samples. <br>
<br>
That’s very strange — the operations should obviously be symmetric and hence produce an equal number of samples. <br>
<br>
> For example, for a subset of 100 experiments, there were 62k LOCK samples for the first protocol and 177k for the second, while there were 0 UNLOCK samples for the first protocol and 6 for the second. That situation was present in all 8 subsets, where the number of UNLOCK samples was never above 10 for either protocol. <br>
> <br>
> Through the tracers, I know that the LOCK and UNLOCK operations were happening as expected, and I did not change the code which saves the overhead tracers. <br>
> <br>
> Could anyone help me understand what is happening? <br>
<br>
Did you see any messages about failed writes to the trace buffers? <br>
<br>
- Björn<br>
<br>
-------------- next part --------------<br>
A non-text attachment was scrubbed...<br>
Name: smime.p7s<br>
Type: application/pkcs7-signature<br>
Size: 5061 bytes<br>
Desc: not available<br>
URL: <<a href="http://lists.litmus-rt.org/pipermail/litmus-dev/attachments/20190512/f5abda71/attachment-0001.bin" rel="noreferrer" target="_blank">http://lists.litmus-rt.org/pipermail/litmus-dev/attachments/20190512/f5abda71/attachment-0001.bin</a>><br>
<br>
------------------------------<br>
<br>
Subject: Digest Footer<br>
<br>
_______________________________________________<br>
litmus-dev mailing list<br>
<a href="mailto:litmus-dev@lists.litmus-rt.org" target="_blank">litmus-dev@lists.litmus-rt.org</a><br>
<a href="https://lists.litmus-rt.org/listinfo/litmus-dev" rel="noreferrer" target="_blank">https://lists.litmus-rt.org/listinfo/litmus-dev</a><br>
<br>
<br>
------------------------------<br>
<br>
End of litmus-dev Digest, Vol 81, Issue 2<br>
*****************************************<br>
</blockquote></div>
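P.S. When I recount the samples, I plan to sanity-check the per-event totals with a small script along these lines. This is only a sketch: the helper name and tolerance are my own, and the counts would in practice come from counting rows in the ft2csv output for each event (the numbers used here are the ones quoted above for one subset of 100 experiments).

```python
# Sanity check for Feather-Trace sample counts: paired operations such as
# LOCK/UNLOCK should produce (roughly) symmetric numbers of samples.
# The counts below are the ones reported for one subset of 100 experiments;
# in practice they would come from counting rows in the ft2csv output.

def symmetric(count_a, count_b, tolerance=0.05):
    """Return True if the two sample counts differ by at most `tolerance`
    (as a fraction of the larger count)."""
    larger = max(count_a, count_b)
    if larger == 0:
        return True  # no samples for either event; nothing to compare
    return abs(count_a - count_b) / larger <= tolerance

# First protocol: 62k LOCK samples vs. 0 UNLOCK samples.
print(symmetric(62_000, 0))    # clearly asymmetric -> False
# Second protocol: 177k LOCK samples vs. 6 UNLOCK samples.
print(symmetric(177_000, 6))   # also asymmetric -> False
```

A check like this makes it easy to flag, per subset, exactly which event pairs are out of balance before digging into the kernel-side tracing code.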