[LITMUS^RT] litmus-dev Digest, Vol 81, Issue 2

Ricardo Teixeira ricardo.btxr at gmail.com
Mon May 13 15:06:09 CEST 2019


Hi Björn,

Thanks.

The two experiments use the same scheduling algorithm; what changes is the
resource-sharing protocol. The workload is the same in both experiments.

My suspicion is that, in one of the experiments, data is being collected
from only one of the processors, but I have not yet confirmed this. I
believe I noticed this at some point while running ft-dump for one of the
protocols, but that was some time ago. As soon as possible, I will run
ft-dump again for both cases.

I will also check the event codes used when saving and reading the overhead
traces; perhaps something was changed by the earlier work I built on.
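A minimal sketch of the kind of sanity check I have in mind, assuming the
trace has already been exported to (event, cpu) pairs by some earlier
parsing step (the input format and event names here are placeholders, not
the actual Feather-Trace record layout):

```python
from collections import Counter

def sanity_check(samples):
    """Tally trace samples per CPU and per event, and flag suspicious
    patterns.  `samples` is assumed to be a list of (event_name, cpu)
    pairs produced by an earlier parsing step (placeholder input, not
    the real Feather-Trace binary format)."""
    per_cpu = Counter(cpu for _, cpu in samples)
    per_event = Counter(ev for ev, _ in samples)

    warnings = []
    # If only one CPU shows up, data is likely being collected from
    # a single processor.
    if len(per_cpu) == 1:
        warnings.append("samples recorded on a single CPU only")
    # LOCK and UNLOCK are symmetric operations, so their sample
    # counts should be (nearly) equal.
    if per_event["LOCK"] != per_event["UNLOCK"]:
        warnings.append(
            f"LOCK/UNLOCK imbalance: "
            f"{per_event['LOCK']} vs {per_event['UNLOCK']}")
    return per_cpu, per_event, warnings

# Hypothetical trace: 3 LOCKs but only 1 UNLOCK, all on CPU 0.
demo = [("LOCK", 0), ("LOCK", 0), ("UNLOCK", 0), ("LOCK", 0)]
per_cpu, per_event, warnings = sanity_check(demo)
print(per_event["LOCK"], per_event["UNLOCK"], len(warnings))  # 3 1 2
```

If the per-CPU tally shows only one CPU, that would confirm the
single-processor suspicion; an imbalanced LOCK/UNLOCK tally would point
at missing or mislabeled event codes instead.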

There are no error messages when the traces are recorded at the end of the
experiments.
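Related to your point that release events are determined solely by the
workload: the expected number of RELEASE samples can be estimated up front
and compared against what ft-dump reports. A back-of-the-envelope sketch
(the periods below are made up for illustration, not my actual task set):

```python
def expected_releases(period_ms, duration_ms):
    """Number of job releases of one periodic task during an
    experiment, counting the release at time 0.  Ignores release
    jitter and task-set start-up effects (a simplification)."""
    return duration_ms // period_ms + 1

# Hypothetical task set (periods in ms) over a 30-second experiment,
# matching the experiment duration used in my runs.
periods = [10, 20, 50]
total = sum(expected_releases(p, 30_000) for p in periods)
print(total)  # 3001 + 1501 + 601 = 5103
```

If the two setups report RELEASE counts that differ from this estimate
(or from each other) by a large factor, tracing rather than the workload
is the likely culprit.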

Best regards,

Ricardo

On Mon, May 13, 2019 at 07:01, <litmus-dev-request at lists.litmus-rt.org>
wrote:

> Send litmus-dev mailing list submissions to
>         litmus-dev at lists.litmus-rt.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://lists.litmus-rt.org/listinfo/litmus-dev
> or, via email, send a message with subject or body 'help' to
>         litmus-dev-request at lists.litmus-rt.org
>
> You can reach the person managing the list at
>         litmus-dev-owner at lists.litmus-rt.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of litmus-dev digest..."
>
>
> Today's Topics:
>
>    1. Re: Schedule and Locking Overhead information (Björn Brandenburg)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 12 May 2019 12:25:28 +0200
> From: Björn Brandenburg <bbb at mpi-sws.org>
> To: litmus-dev at lists.litmus-rt.org
> Subject: Re: [LITMUS^RT] Schedule and Locking Overhead information
> Message-ID: <4BC7E4D9-76B5-446C-A4AA-5EE704826FF5 at mpi-sws.org>
> Content-Type: text/plain; charset="utf-8"
>
> > On 11. May 2019, at 18:19, Ricardo Teixeira <ricardo.btxr at gmail.com>
> wrote:
> >
> > I executed 800 experiments for one stock protocol; each experiment ran
> > for 30 seconds. During the experiments, I collected overhead information
> > using the scripts available on GitHub (through the Feather-Trace tools).
> >
> > I also ran the same set of experiments for another stock protocol and
> > collected the same overhead information. Both protocols generate nearly
> > the same number of jobs.
> >
> > Analysing the collected data, I realized that the second protocol
> > generated about three times more samples than the first, for all types
> > of data collected, for example: CXS, LOCK, RELEASE-LATENCY, RELEASE,
> > SCHED, and SCHED2.
>
> Hi Ricardo,
>
> some variation in the number of scheduling decisions and context switches
> can be expected, especially if you are comparing different scheduling
> and/or locking policies, but release-related events are determined solely
> by the workloads, so if you are running the same workload under both
> setups, it shouldn’t differ by a factor of three.
>
> >
> > Another problem was that both protocols generated a substantial number
> > of LOCK samples, but an insignificant number of UNLOCK samples.
>
> That’s very strange — the operations should obviously be symmetric and
> hence produce an equal number of samples.
>
> > For example: for a subset of 100 experiments, there were 62k LOCK
> > samples for the first protocol and 177k for the second, while there were
> > 0 UNLOCK samples for the first protocol and 6 for the second. This
> > situation was present in all 8 subsets; the number of UNLOCK samples was
> > never above 10 for either protocol.
> >
> > Through the tracers, I know that the LOCK and UNLOCK operations were
> > happening as expected, and I have not changed the code that saves the
> > overhead traces.
> >
> > Could anyone help me to understand what is happening?
>
> Did you see any messages about failed writes to the trace buffers?
>
> - Björn
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> litmus-dev mailing list
> litmus-dev at lists.litmus-rt.org
> https://lists.litmus-rt.org/listinfo/litmus-dev
>
>
> ------------------------------
>
> End of litmus-dev Digest, Vol 81, Issue 2
> *****************************************
>

