Eliot Miranda uploaded a new version of VMMaker to project VM Maker:
http://source.squeak.org/VMMaker/VMMaker.oscog-eem.2303.mcz
==================== Summary ====================
Name: VMMaker.oscog-eem.2303
Author: eem
Time: 30 December 2017, 4:43:05.961374 pm
UUID: 8cb7a9db-781c-4795-8cb7-ebd46abf0932
Ancestors: VMMaker.oscog-eem.2302
Simulation/Translation tweaks. Mark some simulation-only InterpreterPlugin methods as doNotGenerate. Slow down the simulated clock on the StackInterpreter (so that in simulation fewer tests time out). Provide an optional simulation-only primTraceLog for the StackInterpreter (which was used to debug the new 64-bit at:[put:] support).
=============== Diff against VMMaker.oscog-eem.2302 ===============
Item was changed:
----- Method: InterpreterPlugin>>cCoerce:to: (in category 'simulation') -----
cCoerce: value to: cType
+ <doNotGenerate>
"Type coercion for translation only; just return the value when running in Smalltalk.
This overrides the generic coercion method in VMClass. For some reason we are the exception.
If we want that style of coercion we can send cCoerce:to: to interpreterProxy, not self."
^value!
Item was changed:
----- Method: InterpreterPlugin>>cCoerceSimple:to: (in category 'simulation') -----
cCoerceSimple: value to: cType
+ <doNotGenerate>
"Coercion without type mapping. Don't even bother to check for valid types..."
^value!
Item was changed:
----- Method: InterpreterPlugin>>translatedPrimitiveArgument:ofType:using: (in category 'simulation') -----
translatedPrimitiveArgument: index ofType: cTypeString using: aCCodeGenerator
+ <doNotGenerate>
| oop unitSize |
oop := interpreterProxy stackValue: interpreterProxy methodArgumentCount - index.
(interpreterProxy isOopForwarded: oop) ifTrue: [^nil].
cTypeString last == $* ifTrue:
[unitSize := self sizeof: (aCCodeGenerator baseTypeForPointerType: cTypeString) asSymbol.
unitSize caseOf: {
[1] -> [(interpreterProxy isBytes: oop) ifFalse: [^nil]].
[2] -> [(interpreterProxy isShorts: oop) ifFalse: [^nil]].
[4] -> [(interpreterProxy isWords: oop) ifFalse: [^nil]].
[8] -> [(interpreterProxy isLong64s: oop) ifFalse: [^nil]] }
otherwise: [^nil].
^ObjectProxyForTranslatedPrimitiveSimulation new
interpreter: interpreterProxy
oop: oop
unitSize: unitSize].
((interpreterProxy isIntegerObject: oop)
and: [aCCodeGenerator isIntegralCType: cTypeString]) ifTrue:
[^interpreterProxy integerValueOf: oop].
self halt!
Item was changed:
StackInterpreterPrimitives subclass: #StackInterpreterSimulator
+ instanceVariableNames: 'parent bootstrapping byteCount breakCount sendCount lookupCount printSends printReturns traceOn myBitBlt displayForm fakeForm filesOpen imageName pluginList mappedPluginEntries quitBlock transcript displayView eventTransformer printFrameAtEachStep printBytecodeAtEachStep systemAttributes startMicroseconds lastYieldMicroseconds externalSemaphoreSignalRequests externalSemaphoreSignalResponses extSemTabSize atEachStepBlock disableBooleanCheat performFilters eventQueue assertVEPAES primTraceLog'
- instanceVariableNames: 'parent bootstrapping byteCount breakCount sendCount lookupCount printSends printReturns traceOn myBitBlt displayForm fakeForm filesOpen imageName pluginList mappedPluginEntries quitBlock transcript displayView eventTransformer printFrameAtEachStep printBytecodeAtEachStep systemAttributes startMicroseconds lastYieldMicroseconds externalSemaphoreSignalRequests externalSemaphoreSignalResponses extSemTabSize atEachStepBlock disableBooleanCheat performFilters eventQueue assertVEPAES'
classVariableNames: ''
poolDictionaries: ''
category: 'VMMaker-InterpreterSimulation'!
!StackInterpreterSimulator commentStamp: 'eem 9/3/2013 11:05' prior: 0!
This class defines basic memory access and primitive simulation so that the StackInterpreter can run simulated in the Squeak environment. It also defines a number of handy object viewing methods to facilitate pawing around in the object memory.
To see the thing actually run, you could (after backing up this image and changes), execute
(StackInterpreterSimulator new openOn: Smalltalk imageName) test
((StackInterpreterSimulator newWithOptions: #(NewspeakVM true MULTIPLEBYTECODESETS true))
openOn: 'ns101.image') test
and be patient both to wait for things to happen, and to accept various things that may go wrong depending on how large or unusual your image may be. We usually do this with a small and simple benchmark image.
Here's an example of what Eliot uses to launch the simulator in a window. The bottom-right window has a menu packed with useful stuff:
| vm |
vm := StackInterpreterSimulator newWithOptions: #().
vm openOn: '/Users/eliot/Squeak/Squeak4.4/trunk44.image'.
vm setBreakSelector: #&.
vm openAsMorph; run!
Item was changed:
----- Method: StackInterpreterSimulator>>ioUTCMicroseconds (in category 'I/O primitives support') -----
ioUTCMicroseconds
"Return the value of the microsecond clock."
"NOT. Actually, we want something a lot slower and, for exact debugging,
something more repeatable than real time. Dan had an idea: use the byteCount..."
+ ^(byteCount // 50) + startMicroseconds!
- ^byteCount + startMicroseconds
-
- "At 20k bytecodes per second, this gives us aobut 200 ticks per second, or about 1/5 of what you'd expect for the real time clock. This should still service events at one or two per second"!
Item was added:
+ ----- Method: StackInterpreterSimulator>>slowPrimitiveResponse (in category 'primitive support') -----
+ slowPrimitiveResponse
+ primTraceLog ifNotNil:
+ [primTraceLog size > 127 ifTrue:
+ [primTraceLog removeFirst].
+ primTraceLog addLast: primitiveFunctionPointer].
+ ^super slowPrimitiveResponse!
Item was added:
+ ----- Method: StackInterpreterSimulator>>turnOnPrimTraceLog (in category 'accessing') -----
+ turnOnPrimTraceLog
+ primTraceLog ifNil:
+ [primTraceLog := OrderedCollection new: 512]!
Hi all,
As I said earlier I am on my way through a student project (cc'ed) to
re-write/split/kill MiscPrimitive plugin. The plugin is composed of:
- 2 Bitmap primitives
- 1 Sound primitive
- 6 ByteObject primitives
I will discuss here what is planned for each category of primitives. I would
like comments and Eliot's approval before moving forward with the student
project.
*2 Bitmap primitives*
Those are the compress and uncompress Bitmap primitives. Tim suggested that
we move these primitives to the BitBlt plugin (hence converting them from
SmartSyntax to Slang). I think this is the right thing to do. Agreed?
*1 Sound primitive*
This primitive can simply be moved to the SoundGenerator plugin.
SoundGenerator is also built using SmartSyntax. Agreed?
*6 ByteObject primitives*
*1. translate: aString from: start to: stop table: table*
This primitive seems to be unused. I suggest we move it from a primitive to
plain Smalltalk code.
*2. stringHash: aString initialHash: speciesHash*
Since we now have hashMultiply as a primitive, a Smalltalk version of
stringHash using hashMultiply is now faster than the primitive for very
small strings, and slower (2x) on medium to large strings. I suggest we move
it to plain Smalltalk code.
*3. findFirstInString, indexOfAscii, findSubString*
For these 3 primitives we can either add a ByteStringPlugin or add them as
numbered primitives.
*4. compare: string1 with: string2 collated: order*
This primitive is really important for performance; there are production
applications spending a huge amount of time on string equality. I suggest
that we move this one to a numbered primitive that takes 2 or 3 parameters,
the order being optional, and that we rewrite the version without order (i.e.
the version used by String>>#=, String>>#>, String>>#>=, etc.) in the JIT
(numbered primitives are required for that).
Alternatively we can add a numbered String>>#= primitive and put
the compare primitive in a ByteStringPlugin.
*5. String concatenation*
I would also suggest adding ByteString>>#, as a primitive. String
concatenation is really important performance-wise, and that will ease
deforestation (i.e. removing the allocation of temporary byte strings) in
the Sista JIT, which is quite difficult right now.
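As a sketch of what such a method could look like (the primitive name and
module below are purely illustrative, not an existing plugin), the Smalltalk
fallback is just an allocation plus two replaceFrom:to:with:startingAt: calls:

	, aByteString
		"The primitive pragma is hypothetical; the fallback carries the semantics."
		<primitive: 'primitiveStringConcatenate' module: 'ByteStringPlugin'>
		| result |
		result := self species new: self size + aByteString size.
		result
			replaceFrom: 1 to: self size with: self startingAt: 1;
			replaceFrom: self size + 1 to: result size with: aByteString startingAt: 1.
		^result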
*Conclusion*
The main question is: do we want 1 numbered primitive plus 3-5 primitives in
a small ByteStringPlugin, or do we want 3-5 numbered primitives? I don't
mind either way.
I looked at the performance on the Sista VM, but there is still quite some
work left to make all of those efficient and in production. Recently other VMs
(especially V8) decided to lose some peak performance in exchange for
better baseline performance, since many applications they were running were
spending only 15-20% of the time in optimised code and 80-85% in baseline
code; so primitive performance is a big deal no matter what we have in the
future.
--
Clément Béra
https://clementbera.wordpress.com/
Bâtiment B 40, avenue Halley 59650 *Villeneuve d'Ascq*
Hi,
I wanted to set up a new Pharo 7 image for contributing. I used the description from here: https://github.com/pharo-project/pharo/wiki/Contribute-a-fix-to-Pharo
After I had successfully cloned the pharo repository, Pharo crashed. I am afraid I cannot remember the last thing I did before the crash. Nevertheless, maybe the crash dump is of use to someone able to interpret its content.
I am on macOS Sierra 10.12.6.
./pharo --version
5.0 5.0.201708271955 Mac OS X built on Aug 27 2017 20:27:09 UTC Compiler: 4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53) [Production Spur VM]
CoInterpreter VMMaker.oscog-eem.2265 uuid: 76b62109-629a-4c39-9641-67b53321df9a Aug 27 2017
StackToRegisterMappingCogit VMMaker.oscog-eem.2262 uuid: 8b531242-de02-48aa-b418-8d2dde0bec6c Aug 27 2017
VM: 201708271955 https://github.com/OpenSmalltalk/opensmalltalk-vm.git $ Date: Sun Aug 27 21:55:26 2017 +0200 $
Plugins: 201708271955 https://github.com/OpenSmalltalk/opensmalltalk-vm.git $
Happy New Year!
Bernhard
Hi, (cross posting vm-dev)
nssm is nice - but requires additional tools.
Since the days of Squeak the windows VM had the
-service "ServiceName"
option and one was able to run Squeak as a windows service out of the box.
See http://wiki.squeak.org/squeak/105 for details.
This option allows one to register/deregister with the Windows service manager and run
a headless image.
I have run a Squeak Wiki (Swiki/Comanche) for years with this and it is very nice and stable.
Unfortunately this is broken in recent Pharo VMs, and so far Esteban and others have not
had the time to look into this issue. It would be really nice if this option could
be restored in 2018, so one could easily deploy and run Seaside or Teapot/Tealight
or Zinc/WebClient based web services on Windows.
If we want to deploy Smalltalk-based web applications or services on Windows, we
should support that. It will keep Windows administrators happy, and we would integrate with
the whole ecosystem (for instance, you can start/stop a service using Windows scripting
for doing backups, etc.) right out of the box.
Bye
T.
Sent: Friday, 29 December 2017 at 11:36
From: "phil(a)highoctane.be" <phil(a)highoctane.be>
To: "Any question about pharo is welcome" <pharo-users(a)lists.pharo.org>
Subject: Re: [Pharo-users] Running headless on Windows
If you want to run Pharo as a service, I have found nssm to be working well.
https://nssm.cc
Phil
On Dec 29, 2017 09:25, "Nicolai Hess" <nicolaihess@gmail.com> wrote:
2017-12-29 3:07 GMT+01:00 Andrei Stebakov <lispercat@gmail.com>:
Pierce, I tried all of those "no display" options, the result is the same
On Dec 28, 2017 8:37 PM, "Pierce Ng" <pierce@samadhiweb.com> wrote: On Wed, Dec 27, 2017 at 04:58:39PM +0100, Cyril Ferlicot D. wrote:
> On 12/27/2017 04:39 PM, Andrei Stebakov wrote:
> > When I run Pharo 6.1 with -- headless option on Windows, it executes the
> > eval command as expected but during the execution (which lasts 4 sec) it
> > opens the Pharo GUI.
> > Is it expected? I thought headless means that the whole execution would
> > happen in the background
>
> I think that currently Pharo does not have a "real" headless. But I
> heard there was work on that part for Pharo 7.
I know OP is talking about Windows... I've been running server applications on
Linux without X11 with -vm-display-null and in-image RFBServer for access to
Pharo over VNC. This works very well for me.
I believe "real" headless means GUI is not run at all and therefore does not
consume CPU cycles, which is very welcome. Meanwhile, maybe -vm-display-null
works on Windows for scripting purposes?
Pierce
Hi Andrei,
can you try this:
Open Pharo normal (no headless option).
Change the window size to "not-maximized" (even if it is actually not maximized, maximize it once and change it back to "not-maximized").
Save and quit the image.
After that, a call like
pharo --headless pharo.image eval "DateAndTime now"
will write the output to the stdout file, without opening a window.
> Hi,
>
> Indeed, I implemented the quick paths only for byte objects and pointer
> objects. The reason for this is that I based the implementation on customer
> benchmarks, and there were a lot of replacements for pointer objects
> (typically, growing the inner array of OrderedCollections/Dictionaries)
> and byte objects (typically byte string concatenation, such as 'a' , 'b',
> which calls this primitive twice for only a couple of characters and is now
> much faster).
>
> I wrote the quick path with short copies in mind, in which case switching to
> the C runtime is too expensive. Based on your analysis, for pointer objects
> it looks good, but for byte objects it gets slower.
>
> So I see two solutions for the slow down on byte copies:
> 1) over a threshold of 1000 elements, fall back to the slang primitive for
> byte objects
> 2) Generate smarter copying machine code (for example with some kind of
> Duff's device / unrolled loop / overlapping copies with one non-aligned copy).
> If possible, nothing requiring us to extend the RTL. Note that memcpy in C is
> written as per-back-end files which are thousands of lines on x86, and we
> don't want to go anywhere near that complexity; in our case the size of the
> generated machine code also matters and we want to keep it small. We could
> use pointer-size reads/writes on byte objects too if we deal with aliasing
> (if array == replArray just fall back to a slower path).
>
> Let's look at the naive machine code in Cog's RTL (I could show the actual
> assembly version if you like it better), in each case the loop does:
> - compare and jumpBelow to see if the copy is finished
> - read one field from repl
> - write the read field in the array
> - increment the 2 registers holding indexes for read/write
> - jump back.
>
> The copying machine code for byte objects is as follows:
>
> instr := cogit CmpR: startReg R: stopReg.
> jumpFinished := cogit JumpBelow: 0.
> cogit MoveXbr: repStartReg R: replReg R: TempReg.
> cogit MoveR: TempReg Xbr: startReg R: arrayReg.
> cogit AddCq: 1 R: startReg.
> cogit AddCq: 1 R: repStartReg.
> cogit Jump: instr.
>
> The copying code for pointer objects is as follows:
> instr := cogit CmpR: startReg R: stopReg.
> jumpFinished := cogit JumpBelow: 0.
> cogit MoveXwr: repStartReg R: replReg R: TempReg.
> cogit MoveR: TempReg Xwr: startReg R: arrayReg.
> cogit AddCq: 1 R: startReg.
> cogit AddCq: 1 R: repStartReg.
> cogit Jump: instr.
>
> Any idea what we could do smarter, but not too complex, without extending
> the RTL and without generating, let's say, more than 20 extra instructions?
>
> I think checking for aliasing and using pointer-size reads/writes may be the
> way to go. Then if performance is not better we just fall back to the Slang
> code.
>
> On Mon, Dec 25, 2017 at 10:57 PM, Levente Uzonyi <leves(a)caesar.elte.hu>
> wrote:
>
>> Hi Clément,
>>
>> I finally found the time to write some benchmarks.
>> I compared the output of the script below on the sqcogspur64linuxht VMs
>> 201710061559 and 201712221331 available on Bintray.
>>
>> result := { ByteArray. DoubleByteArray. WordArray. DoubleWordArray.
>> ByteString. WideString. FloatArray. Array } collect: [ :class |
>> | collection |
>> Smalltalk garbageCollect.
>> collection := class basicNew: 10000.
>> class -> (#(0 1 2 5 10 20 50 100 200 500 1000 2000 5000 10000)
>> collect: [ :size |
>> | iterations time overhead |
>> iterations := (40000000 // (size max: 1) sqrt) floor.
>> overhead := [ 1 to: iterations do: [ :i | ] ] timeToRun.
>> time := [ 1 to: iterations do: [ :i |
>> collection replaceFrom: 1 to: size with:
>> collection startingAt: 1 ] ] timeToRun.
>> { size. iterations. time - overhead } ]) ].
>>
>> I found that the quick paths are probably only implemented for bytes and
>> pointers collections, because there was no significant difference for
>> DoubleByteArray, WordArray, DoubleWordArray, WideString and FloatArray.
>
>
>> For pointers and bytes collections, there's significant speedup when the
>> copied portion is small. However, somewhere between 50 and 100 copied
>> elements, the copying of bytes collections becomes slower (up to 1.5x @
>> 100k elements) with the newer VM.
>> It's interesting that this doesn't happen to pointers classes. Instead of
>> slowdown there's still 1.5x speedup even at 100k elements.
>>
>> Levente
>>
>>
>> On Mon, 23 Oct 2017, Clément Bera wrote:
>>
>>> Hi all,
>>> For a long time I have been meaning to add the primitive
>>> #replaceFrom:to:with:startingAt: in the JIT but did not take the time to do
>>> it. These days I am showing the JIT to one of my students, and as an example
>>> of how one would write code in the JIT we implemented this primitive
>>> together, Spur-only. This is part of commit 2273.
>>>
>>> I implemented quick paths for byte objects and array-like objects only.
>>> The rationale behind this is that the most common cases I see in Pharo user
>>> benchmarks in the profiler are copies of arrays and byteStrings. Typically
>>> some application benchmarks would show 3-5% of the
>>> time spent in copying small things, and switching from the JIT runtime
>>> to the C runtime is an important part of the cost.
>>>
>>> First evaluation shows the following speed-ups, but I've just done this
>>> quickly on my machine:
>>>
>>> Copy of size 0
>>> Array 2.85x
>>> ByteString 2.7x
>>> Copy of size 1
>>> Array 2.1x
>>> ByteString 2x
>>> Copy of size 3
>>> Array 2x
>>> ByteString 1.9x
>>> Copy of size 8
>>> Array 1.8x
>>> ByteString 1.8x
>>> Copy of size 64
>>> Array 1.1x
>>> ByteString 1.1x
>>> Copy of size 1000
>>> Array 1x
>>> ByteString 1x
>>>
>>> So I would expect some macro benchmarks to get a 1 to 3% speed-up.
>>> Not as much as I expected, but it's there.
>>>
>>> Can someone who is good at benchmarks, such as Levente, have a look and
>>> provide us with a better evaluation of the performance difference?
>>>
>>> Thanks.
>>>
>>> --
>>> Clément Béra
>>> https://clementbera.wordpress.com/
>>> Bâtiment B 40, avenue Halley 59650 Villeneuve d'Ascq
>>>
>>>
>
>
> --
> Clément Béra
> https://clementbera.wordpress.com/
> Bâtiment B 40, avenue Halley 59650 *Villeneuve d'Ascq*
>
On 27 December 2017 at 22:43, Esteban Lorenzano <notifications(a)github.com>
wrote:
> heh… I have to say: I do not contribute to osvm other than via PRs (never
> directly), so I always expect to have a green build to merge. Also, I always
> first have a green build in my own build process (which builds all sources
> from scratch and tests the resulting VM by executing all tests in Pharo), so
> when I do the PR, I have tested my sources.
> The problem we had with Pharo was because of two colliding (unrelated)
> issues:
>
> - the server destination for deploys changed, and then the keys immediately
> became wrong
Sure, this is an anomaly that shouldn't occur too often. It's kind of an
act-of-God event.
But to remove such an external dependency, maybe each distribution's
"deployment" could move to an isolated pipeline stage, so one distribution's
deploy infrastructure can never affect the status of the build itself.
https://blog.travis-ci.com/2017-05-11-introducing-build-stages
> - Travis updated its servers and no longer includes libcurl for i386 by
> default
> this escapes the “you break it, you fix it”
> I agree no contribution should be accepted if it does not pass through a
> PR (and CI needs to pass), but that’s actually hard to do with the current
> process: today VMMaker Monticello is the “development branch” and you
> cannot prepare PRs with that… you need to branch before generating sources,
Actually you can create and change to a new branch after you generate
sources if you forget to do it before.
See accepted answer here...
https://stackoverflow.com/questions/2569459/git-create-a-branch-from-unstag…
> then commit into the branch and then PR. I would recommend using that
> approach, but it is more work than just committing, so it is hard to adopt.
>
If one routinely uses branch-per-feature, then generating sources from
VMMaker is just another feature in a familiar workflow.
It trades a small additional effort per integration
against the work to recover from the occasional slip getting into the main
branch.
I guess the main difficulty is changing workflow habits. In practice there
is some risk in changing workflows.
Anomalies can always pop up, and it's about finding the time to build
confidence in a new workflow.
btw, I think the community has gained a lot already in the move from svn to
git/github.
We'll get the next step sometime.
cheers -ben
> Esteban
>
> > On 27 Dec 2017, at 00:18, Ben Coman <notifications(a)github.com> wrote:
> >
> > A side-comment from the peanut gallery...
> >
> > On 16 November 2017 at 03:47, Fabio Niephaus <notifications(a)github.com>
> > wrote:
> >
> > > It is indeed. I was hoping we do "you break it, you fix it", but that
> > > didn't work apparently.
> > >
> > Taking the path of least resistance is human nature. There are varying
> > levels of "too busy to fix that right now." Introducing the merge-barrier
> > shifts that balance point to encourage the idealistic behaviour of fixing
> > errors asap.
> >
> > To mitigate concerns of delayed merges...
> > If these barriers sometimes get in the way of something critical, it should
> > be okay to temporarily disable them. But at least the default encourages
> > the ideal behaviour, and bypassing that requires explicit action rather than
> > happening accidentally.
> >
> > > We could force the Cog branch to always be green by only allowing changes
> > > that previously have been proven to pass, but then it takes longer to get
> > > things merged. Not sure if we want that...
> > >
> > Delayed merges are a fairly generic concern. It would be good to hear
> > some detail from everyone concerned that enabling the following would
> > impede their workflow...
> >
> > https://help.github.com/assets/images/help/repository/protecting-branch-loose-status.png
> >
> > That is...
> > What period of delay are you concerned about?
> > How often do you think this would be a problem?
> > Is the problem the delay in merging-code, or the delay in getting the new
> > binary to run?
> >
> > cheers -ben
> > —
> > You are receiving this because you were mentioned.
> > Reply to this email directly, view it on GitHub <https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/172#issuecomment-354023920>, or mute the thread <https://github.com/notifications/unsubscribe-auth/AAfWXhk8Fm6tVkwyY5Q_BvaSHcwL6Wb2ks5tEX6_gaJpZM4QetkG>.
> >
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/172#issuecomment-3…>,
> or mute the thread
> <https://github.com/notifications/unsubscribe-auth/ABolJ9PrASAaRthpzh5Mj74RQ…>
> .
>
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/172#issuecomment-3…
heh… I have to say: I do not contribute to osvm other than via PRs (never directly), so I always expect to have a green build to merge. Also, I always first have a green build in my own build process (which builds all sources from scratch and tests the resulting VM by executing all tests in Pharo), so when I do the PR, I have tested my sources.
The problem we had with Pharo was because of two colliding (unrelated) issues:
- the server destination for deploys changed, and then the keys immediately became wrong
- Travis updated its servers and no longer includes libcurl for i386 by default
this escapes the “you break it, you fix it”
I agree no contribution should be accepted if it does not pass through a PR (and CI needs to pass), but that’s actually hard to do with the current process: today VMMaker Monticello is the “development branch” and you cannot prepare PRs with that… you need to branch before generating sources, then commit into the branch and then PR. I would recommend using that approach, but it is more work than just committing, so it is hard to adopt.
Esteban
> On 27 Dec 2017, at 00:18, Ben Coman <notifications(a)github.com> wrote:
>
> A side-comment from the peanut gallery...
>
> On 16 November 2017 at 03:47, Fabio Niephaus <notifications(a)github.com>
> wrote:
>
> > It is indeed. I was hoping we do "you break it, you fix it", but that
> > didn't work apparently.
> >
> Taking the path of least resistance is human nature. There are varying
> levels of "too busy to fix that right now." Introducing the merge-barrier
> shifts that balance point to encourage the idealistic behaviour of fixing
> errors asap.
>
> To mitigate concerns of delayed merges...
> If these barriers sometimes get in the way of something critical, it should
> be okay to temporarily disable them. But at least the default encourages
> the ideal behaviour, and bypassing that requires explicit action rather than
> happening accidentally.
>
> > We could force the Cog branch to always be green by only allowing changes
> > that previously have been proven to pass, but then it takes longer to get
> > things merged. Not sure if we want that...
> >
> Delayed merges are a fairly generic concern. It would be good to hear
> some detail from everyone concerned that enabling the following would
> impede their workflow...
>
> https://help.github.com/assets/images/help/repository/protecting-branch-loo…
>
> That is...
> What period of delay are you concerned about?
> How often do you think this would be a problem?
> Is the problem the delay in merging-code, or the delay in getting the new
> binary to run?
>
> cheers -ben
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub <https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/172#issuecomment-3…>, or mute the thread <https://github.com/notifications/unsubscribe-auth/AAfWXhk8Fm6tVkwyY5Q_BvaSH…>.
>
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/172#issuecomment-3…
A side-comment from the peanut gallery...
On 16 November 2017 at 03:47, Fabio Niephaus <notifications(a)github.com>
wrote:
> It is indeed. I was hoping we do "you break it, you fix it", but that
> didn't work apparently.
>
Taking the path of least resistance is human nature. There are varying
levels of "too busy to fix that right now." Introducing the merge-barrier
shifts that balance point to encourage the idealistic behaviour of fixing
errors asap.
To mitigate concerns of delayed merges...
If these barriers sometimes get in the way of something critical, it should
be okay to temporarily disable them. But at least the default encourages
the ideal behaviour, and bypassing that requires explicit action rather than
happening accidentally.
> We could force the Cog branch to always be green by only allowing changes
> that previously have been proven to pass, but then it takes longer to get
> things merged. Not sure if we want that...
>
Delayed merges are a fairly generic concern. It would be good to hear
some detail from everyone concerned that enabling the following would
impede their workflow...
https://help.github.com/assets/images/help/repository/protecting-branch-loo…
That is...
What period of delay are you concerned about?
How often do you think this would be a problem?
Is the problem the delay in merging-code, or the delay in getting the new
binary to run?
cheers -ben
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/OpenSmalltalk/opensmalltalk-vm/issues/172#issuecomment-3…
Hi @OpenSmalltalk-Bot!
Help us secure your GitHub account by verifying your email address (vm-dev(a)lists.squeakfoundation.org). This lets you access all of GitHub's features.
Click the link below to verify your email address:
https://github.com/users/OpenSmalltalk-Bot/emails/42764531/confirm_verifica…
You’re receiving this email because you recently created a new GitHub account or added a new email address. If this wasn’t you, please ignore this email.
---
Sent with <3 by GitHub.
GitHub, Inc. 88 Colin P Kelly Jr Street
San Francisco, CA 94107