Renku messages are very ambiguous, for example "Error while loading project configuration" when opening a project

I am getting "Error while loading project configuration" when I open a project. There is no clue as to what the problem might be; if I could at least see the exception, it might give some indication of what failed. The Renku version is 0.63.0.
In my experience, Renku messages are very ambiguous and extremely difficult to debug. Could I ask that more descriptive messages be added to the code?

Renku version [0.63.0](https://github.com/SwissDataScienceCenter/renku/releases/tag/0.63.0)

Renku component versions

* UI: [3.45.0](https://github.com/SwissDataScienceCenter/renku-ui/releases/tag/3.45.0)
* Core: [v2.9.2](https://github.com/SwissDataScienceCenter/renku-python/releases/tag/v2.9.2)
* Data Services: [v0.29.0](https://github.com/SwissDataScienceCenter/renku-data-services/releases/tag/v0.29.0)
* Knowledge Graph: [2.50.0](https://github.com/SwissDataScienceCenter/renku-graph/releases/tag/2.50.0)
* Notebooks: [1.27.1](https://github.com/SwissDataScienceCenter/renku-notebooks/releases/tag/1.27.1)
* Search: [v0.7.0](https://github.com/SwissDataScienceCenter/renku-search/releases/tag/v0.7.0)

Just to note, it's the session class that is failing to load in this case.

This is the part of the screen that fails to load, showing "Error While Loading Project Configuration".

Here is an example of the request that seems to be timing out. However, if I close the project and go back in, it works:

```json
{"time":"2025-01-17T11:28:42.239948441Z","level":"INFO","msg":"PROXY","requestID":"xx","destination":"http://renku-data-service"}
{"time":"2025-01-17T11:28:42.258015679Z","level":"INFO","msg":"REQUEST","uri":"/api/data/sessions","status":200,"requestID":"xx","method":"GET","handler":"/api/data/*","userAgent":"node-fetch/1.0 (+https://github.com/bitinn/node-fetch)"}
{"time":"2025-01-17T11:28:44.51705017Z","level":"INFO","msg":"PROXY","requestID":"xx","destination":"http://renku-uiserver:80"}
{"time":"2025-01-17T11:28:47.247059994Z","level":"INFO","msg":"PROXY","requestID":"xx","destination":"http://renku-notebooks"}
{"time":"2025-01-17T11:28:47.247106994Z","level":"INFO","msg":"PROXY","requestID":"xx","destination":"http://renku-data-service"}
{"time":"2025-01-17T11:28:47.264988247Z","level":"INFO","msg":"REQUEST","uri":"/api/data/sessions","status":200,"requestID":"xx","method":"GET","handler":"/api/data/*","userAgent":"node-fetch/1.0 (+https://github.com/bitinn/node-fetch)"}
{"time":"2025-01-17T11:28:47.279505847Z","level":"INFO","msg":"REQUEST","uri":"/api/notebooks/servers","status":200,"requestID":"xx","method":"GET","handler":"/api/notebooks/*","userAgent":"node-fetch/1.0 (+https://github.com/bitinn/node-fetch)"}
{"time":"2025-01-17T11:28:52.254171781Z","level":"INFO","msg":"PROXY","requestID":"xx","destination":"http://renku-notebooks"}
{"time":"2025-01-17T11:28:52.254979942Z","level":"INFO","msg":"PROXY","requestID":"xx","destination":"http://renku-data-service"}
```

Deleting the gateway pods may have sorted the problem in the short term:

```shell
kubectl delete pod -n renku renku-gateway-8656fccb8c
kubectl delete pod -n renku renku-gateway-8656fccb8b
```
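As a side note, if the gateway is managed by a Deployment (which I believe is the case for the Renku Helm chart, though I have not verified the exact resource name), a rollout restart achieves the same thing without having to look up pod names, and lets you wait for the new pods to become ready:

```shell
# Assumes a Deployment named "renku-gateway" in the "renku" namespace --
# check with `kubectl get deployments -n renku` first.
kubectl rollout restart deployment/renku-gateway -n renku
kubectl rollout status deployment/renku-gateway -n renku
```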

Hello! The team members who could help with this are in an all-day meeting today, but I have alerted them to this thread, and someone will get back to you soon (though it might be next week).

@diarmuidcire yes, we definitely need better error messages. However, I cannot promise when this work will happen. It is a bit hard to prioritize features for Renku admins over the many Renku users and the missing features we want to add for them.

Is this the same renku deployment as some of the other questions you have posted on this forum? If yes then I think this may be caused by some of the networking issues you have described in other posts. And as far as I can tell those networking issues are still not fully resolved.

Yes, it's the same cluster.
Maybe figuring out the networking issues is the best option.
The short-term solution was to restart the gateway pods.