Fix 3 layer nested method calls with the return type includes a wildcard #4565
Conversation
That's a great discovery, many thanks!
```java
// 5. bullet: determine B4 from C
List<Set<InferenceVariable>> components = this.currentBounds.computeConnectedComponents(this.inferenceVariables);
while (!c.isEmpty()) {
	List<Set<InferenceVariable>> components = this.currentBounds.computeConnectedComponents(this.inferenceVariables);
```
This is the only change that I'd like to challenge at this point, because computeConnectedComponents() is not a trivial thing, so we may want to avoid unnecessary calls.
I see it is needed for GenericsRegressionTest_1_8.testBug499725 (and no other test in compiler.regression). Apparently, the initial computed components are insufficient because more inference variables (from inner inference?) are added within the loop.
I made a quick experiment:
- keep the original computation above the loop
- remember the number of inference variables
- inside the loop check if this number has grown
- only if yes, re-compute `components`
This sufficed to let compiler regression tests pass. WDYT?
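The steps above could be sketched roughly like this (a non-runnable sketch only; the names approximate the JDT internals around `InferenceContext18`, this is not the actual patch):

```java
// sketch: recompute the partition only when new inference variables appear
List<Set<InferenceVariable>> components =
		this.currentBounds.computeConnectedComponents(this.inferenceVariables);
int knownVariables = this.inferenceVariables.length;
while (!c.isEmpty()) {
	if (this.inferenceVariables.length > knownVariables) {
		// inner inference has added variables; the cached partition may be stale
		components = this.currentBounds.computeConnectedComponents(this.inferenceVariables);
		knownVariables = this.inferenceVariables.length;
	}
	// ... select and reduce the next constraint from c ...
}
```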
I did think that the new bounds could also introduce connected components, but after thinking about it: the only existing inference variables that lambdas can have are in the parameters (which already have to be resolved for the lambda constraint to be reduced) and in the return type (where you would only be able to have distinct existing inference variables).
I must admit, that my "challenge" was of heuristic nature, not backed by strict reasoning. As you are doing the next step trying to prove correctness, I switched sides trying to provide experimental evidence against the optimization I suggested. 😈
Here is my observation regarding GenericsRegressionTest_1_8.testBug512156_3():

We are inferring this expression:

```java
this.stream(ts1)
	.flatMap(t1 ->
		stream(ts2).flatMap(t2 ->
			stream(ts3).map(t3 ->
				f.apply(t1, t2, t3)
			)
		)
	)
```

I found a situation where

- we are operating with these components: `[R#3, R#5], [R#7]`
- during this while-loop we are adding dependency `R#7 <: R#5`
- after this addition, recomputing `components` would yield `[R#3, R#5, R#7]`, i.e., both components are merged into one.
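To make the merge concrete, here is a small self-contained model (a hypothetical stand-in, not the JDT implementation of `BoundSet.computeConnectedComponents()`): inference variables are nodes, dependencies are undirected edges, and adding `R#7 <: R#5` fuses the two components into one.

```java
import java.util.*;

public class Components {
    // tiny stand-in for BoundSet.computeConnectedComponents():
    // inference variables are nodes, dependencies are undirected edges
    static List<Set<String>> components(Set<String> vars, Map<String, Set<String>> deps) {
        List<Set<String>> result = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        for (String v : vars) {
            if (!seen.add(v)) continue;              // already placed in a component
            Set<String> comp = new TreeSet<>();
            Deque<String> work = new ArrayDeque<>(List.of(v));
            while (!work.isEmpty()) {
                String cur = work.pop();
                comp.add(cur);
                for (String n : deps.getOrDefault(cur, Set.of()))
                    if (seen.add(n)) work.push(n);
            }
            result.add(comp);
        }
        return result;
    }

    static void addDependency(Map<String, Set<String>> deps, String a, String b) {
        deps.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        deps.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    public static void main(String[] args) {
        Set<String> vars = new TreeSet<>(List.of("R#3", "R#5", "R#7"));
        Map<String, Set<String>> deps = new HashMap<>();
        addDependency(deps, "R#3", "R#5");
        System.out.println(components(vars, deps)); // [[R#3, R#5], [R#7]]
        addDependency(deps, "R#7", "R#5");          // the dependency added during the loop
        System.out.println(components(vars, deps)); // [[R#3, R#5, R#7]]
    }
}
```

Any recomputation that happens before the new dependency arrives will keep working with the stale two-component partition, which is exactly the hazard discussed here.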
The addition of the new dependency happens in a somewhat strange call stack:
```
BoundSet.addBound(TypeBound, LookupEnvironment) line: 429
BoundSet.addBounds(TypeBound[], LookupEnvironment) line: 468
BoundSet.addBounds(BoundSet, LookupEnvironment) line: 477
InferenceContext18.lambda$0(InferenceContext18, BoundSet, boolean) line: 507
Lambda.run() line: not available
InferenceContext18.flushBoundOutbox() line: 524
ASTNode.resolvePolyExpressionArguments0(Invocation, MethodBinding, TypeBinding[], BlockScope) line: 782
ASTNode.resolvePolyExpressionArguments(Invocation, MethodBinding, TypeBinding[], BlockScope) line: 717
MessageSend.findMethodBinding(BlockScope) line: 1138
MessageSend.resolveType(BlockScope) line: 870
ReturnStatement.resolve(BlockScope) line: 367
LambdaExpression.resolveType(BlockScope, boolean) line: 467
LambdaExpression.cachedResolvedCopy(TypeBinding, boolean, boolean, InferenceContext18, Scope) line: 992
LambdaExpression.resolveExpressionExpecting(TypeBinding, Scope, InferenceContext18) line: 1022
ConstraintExpressionFormula.inputVariables(InferenceContext18) line: 521
InferenceContext18.findBottomSet(Set<ConstraintFormula>, Set<InferenceVariable>, List<Set<InferenceVariable>>) line: 1559
InferenceContext18.inferInvocationType(TypeBinding, InvocationSite, MethodBinding) line: 437
```
- see how things are triggered by computing `inputVariables()`
- it's the outer lambda being resolved: `t1 -> ...`
- this drives resolving `stream(ts2).flatMap(..)` to conclusion
- `InferenceContext18.flushBoundOutbox()` is where the new dependency is added to the current bound set
Now, resolving a lambda to find input variables while the target type is a proper type(!) is probably useless: there are no inference variables to be harvested. This is an issue of its own which we may want to improve.
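A guard along these lines might express that idea (a hypothetical, non-runnable sketch against approximate JDT internals, not a proposed patch):

```java
// hypothetical sketch: when harvesting input variables, a lambda whose
// target type is already a proper type cannot contribute any inference
// variables, so resolving it for that purpose could be skipped entirely
if (targetType.isProperType(true))
	return Collections.emptySet();
```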
Still, if we avoid resolving the lambda during inputVariables() it would happen here:

```
LambdaExpression.resolveExpressionExpecting(TypeBinding, Scope, InferenceContext18) line: 1022
InferenceContext18.addLambdaConstraintsToC(LambdaExpression, Set<ConstraintFormula>, MethodBinding, TypeBinding) line: 673
InferenceContext18.inferInvocationType(TypeBinding, InvocationSite, MethodBinding) line: 472
```
This is indeed the call which the current PR defers from initial addConstraintsToC() to a point inside the loop.
Now we could trigger recomputation of components right there, too, but it looks like my suggestion will pull us deep into the rabbit hole again, which we should perhaps defer to a follow-up ticket.
@coehlrich I'll leave it to you to select between your previous version vs the one observing length of inferenceVariables. Either one should be OK for now, while the above observations can be treated separately.
@coehlrich I think this is basically good for M3 (deadline this Friday). Please let me know which variant you prefer for now: recomputing components in every iteration or making this depend on the number of inference variables (see previous comment).
While I am reasonably confident that the JLS wouldn't allow reducing/searching lambdas to add new dependencies between existing inference variables, I am less sure about the non-JLS parts, like the fix for https://bugs.eclipse.org/bugs/show_bug.cgi?id=452788 (bac0b68) and the fix for https://bugs.eclipse.org/bugs/show_bug.cgi?id=506016 (63495b6), and whether those can add such dependencies without any new inference variables being added later (also not helped by the fact that bugs.eclipse.org has been having issues).
If, for nested lambdas, only inference variables related to the return type get added early, then components could only fail to update when the function type is Supplier-like, and with this change such a lambda would be OK to reduce as soon as it is found.
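To illustrate the Supplier-like case (a made-up minimal example, not one of the regression tests): with a zero-parameter function type, the only place an inference variable can occur is the return type.

```java
import java.util.function.Supplier;

public class SupplierCase {
    // T is inferred purely from the lambda's return value:
    // a Supplier's function type has no parameters
    static <T> T get(Supplier<T> s) {
        return s.get();
    }

    static String value() {
        // nested implicit lambdas whose inference variables all live
        // in return types
        return get(() -> get(() -> "ok"));
    }

    public static void main(String[] args) {
        System.out.println(value());
    }
}
```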
With GenericsRegressionTest_1_8.testBug512156_3 the B#5 and B#7 inference variables get integrated into the outermost context in

Line 595 in 8486242:

```java
if (!collectingInnerBounds(() -> reduceAndIncorporate(exprConstraint)))
```
> I am less sure about the non JLS parts like the fix for https://bugs.eclipse.org/bugs/show_bug.cgi?id=452788 (bac0b68)
That's an interesting reference! At that time (2014) the entire business of addConstraintsToC was still in flux.
Interestingly, most of bac0b68 doesn't seem to be needed anymore:
- it looks like field `AllocationExpression.outerInferenceContext` was never ever assigned any non-null value
- reverting a lot of code from that commit doesn't cause any of our tests to fail
See experimental revert in #4624
Force-pushed ed91e35 to bc7c7d4
Build failed due to timeout, which signals that we need a rebase on #4605 or later.

@coehlrich I think this change has some relevant improvement, but the optimization to minimize recomputation of `components` […]. So, if you could provide the following changes today, I think we can safely submit this for M3. Otherwise we will have to postpone this for the next release.
Force-pushed bc7c7d4 to 2ac151f
What it does
Fixes #4557
I did try changing
eclipse.jdt.core/org.eclipse.jdt.core.compiler.batch/src/org/eclipse/jdt/internal/compiler/lookup/InferenceContext18.java
Line 713 in 545483d
to use `b3` from the inner context, but that caused a test to fail. I also tried collecting all of the `b3`s into a single bound set, but that caused another test to fail, and whenever I fixed a test another test would start failing. Eventually I found this bug report for JLS; implementing the suggestion in the latest comment, to only add constraints from implicit lambdas when they are reduced and incorporated instead of adding the current bounds from every generic method, allows the new test to succeed without causing any additional test failures.

How to test
Attempt to compile
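The exact reproducer is elided above; a minimal example of the shape this PR targets (assumed names, not the actual test source) could look like this: three layers of nested method calls where each receiver's type includes a wildcard.

```java
import java.util.stream.Stream;

public class Nested {
    // returns a wildcard-parameterized stream, in the spirit of #4557
    static <T> Stream<? extends T> stream(T[] ts) {
        return Stream.of(ts);
    }

    static long run() {
        Integer[] ts1 = {1, 2};
        String[] ts2 = {"x"};
        // three layers of nested calls; each intermediate return type
        // includes a wildcard, which is what used to break inference
        Stream<String> s = stream(ts1).flatMap(t1 ->
                stream(ts2).flatMap(t2 ->
                        stream(new String[] {"" + t1 + t2}).map(t3 -> t3)));
        return s.count();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

A compiler with the bug rejects this kind of nesting during invocation type inference, while javac accepts it.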
Author checklist
@stephan-herrmann