Posted to user@ignite.apache.org by "steve.hostettler" <st...@gmail.com> on 2020/07/10 14:58:25 UTC
Ignite on AKS and RBAC issue
Hello,
I am deploying an embedded version of Ignite on AKS and I am getting this error:
Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/default/endpoints/processing-engine-pe-v1-ignite
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1900)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1498)
That looks like an RBAC problem to me, but I cannot nail it down. Let me show my current configuration.
NAME                                              READY   STATUS    RESTARTS   AGE
processing-engine-pe-v1.master-69668fcb5b-zm7m8   1/1     Running   0          9m6s
processing-engine-pe-v1.worker-7598949c5d-pkbfg   1/1     Running   0          9m6s
As you can see, there are two pods in the default namespace. The discovery configuration is:
<bean id="tcpDiscoveryKubernetesIpFinder"
      class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
    <property name="namespace" value="default" />
    <property name="serviceName" value="processing-engine-pe-v1-ignite" />
</bean>
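For context, this ip finder bean only takes effect once it is wired into the discovery SPI. A minimal sketch of the surrounding Spring configuration (the wrapping beans are an assumption about the poster's setup, not taken from the post itself):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <!-- Resolves cluster node addresses via the Kubernetes endpoints API,
                 which is the call that is failing with 403 below -->
            <property name="ipFinder" ref="tcpDiscoveryKubernetesIpFinder" />
        </bean>
    </property>
</bean>
```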
The service is there:
kubectl describe svc processing-engine-pe-v1-ignite
Name:              processing-engine-pe-v1-ignite
Namespace:         default
Labels:            app.kubernetes.io/managed-by=Helm
Annotations:       meta.helm.sh/release-name: pe-v1
                   meta.helm.sh/release-namespace: default
Selector:          type=processing-engine-pe-v1.node
Type:              ClusterIP
IP:                None
Port:              service-discovery  47500/TCP
TargetPort:        47500/TCP
Endpoints:         10.244.0.31:47500,10.244.1.28:47500
Session Affinity:  None
Events:            <none>
The service account:
kubectl describe serviceaccount ignite
Name:                ignite
Namespace:           default
Labels:              app.kubernetes.io/managed-by=Helm
Annotations:         meta.helm.sh/release-name: pe-v1
                     meta.helm.sh/release-namespace: default
Image pull secrets:  <none>
Mountable secrets:   **********
Tokens:              **********
Events:              <none>
The role:
kubectl describe clusterrole ignite
Name:         ignite
Labels:       app.kubernetes.io/managed-by=Helm
              release=pe-v1
Annotations:  meta.helm.sh/release-name: pe-v1
              meta.helm.sh/release-namespace: default
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  endpoints  []                 []              [get list watch]
  pods       []                 []              [get list watch]
The role binding:
kubectl describe clusterrolebinding ignite
Name:         ignite
Labels:       app.kubernetes.io/managed-by=Helm
              release=pe-v1
Annotations:  meta.helm.sh/release-name: pe-v1
              meta.helm.sh/release-namespace: default
Role:
  Kind:  ClusterRole
  Name:  ignite
Subjects:
  Kind            Name    Namespace
  ----            ----    ---------
  ServiceAccount  ignite  default
Any idea what I am missing?
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite on AKS and RBAC issue
Posted by "steve.hostettler" <st...@gmail.com>.
I found my mistake: it was indeed a glitch in the deployment YAML, as I had forgotten to specify the service account.
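For anyone hitting the same thing: when a pod spec does not name a service account, the pod runs as the namespace's "default" service account, which the ClusterRoleBinding above does not cover, hence the 403. A minimal sketch of the missing piece in the Deployment (the deployment name, container name, and image are hypothetical; only serviceAccountName: ignite comes from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: processing-engine   # hypothetical name
spec:
  template:
    spec:
      # Without this line the pod runs as the namespace's "default"
      # service account, which has no RBAC grant for endpoints/pods.
      serviceAccountName: ignite
      containers:
        - name: app                     # hypothetical
          image: example/app:latest     # hypothetical
```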
Re: Ignite on AKS and RBAC issue
Posted by "steve.hostettler" <st...@gmail.com>.
Hello Alex, thanks for the tip, but putting everything in the "ignite" namespace does not help. I also rechecked the documentation; I still get the 403.
Additional question: how do the service account and the service relate to each other?
So I have:
1) a service account
kubectl describe serviceaccount ignite -n ignite
Name:                ignite
Namespace:           ignite
Labels:              app.kubernetes.io/managed-by=Helm
Annotations:         meta.helm.sh/release-name: pe-v1
                     meta.helm.sh/release-namespace: ignite
Image pull secrets:  <none>
Mountable secrets:   ignite-token-htqrp
Tokens:              ignite-token-htqrp
Events:              <none>
2) a clusterrole
kubectl describe clusterrole ignite -n ignite
Name:         ignite
Labels:       app.kubernetes.io/managed-by=Helm
              release=pe-v1
Annotations:  meta.helm.sh/release-name: pe-v1
              meta.helm.sh/release-namespace: ignite
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  endpoints  []                 []              [get list watch]
  pods       []                 []              [get list watch]
3) a clusterrolebinding
kubectl describe clusterrolebinding ignite -n ignite
Name:         ignite
Labels:       app.kubernetes.io/managed-by=Helm
              release=pe-v1
Annotations:  meta.helm.sh/release-name: pe-v1
              meta.helm.sh/release-namespace: ignite
Role:
  Kind:  ClusterRole
  Name:  ignite
Subjects:
  Kind            Name    Namespace
  ----            ----    ---------
  ServiceAccount  ignite  ignite
4) a service
kubectl describe svc processing-engine-pe-v1-ignite -n ignite
Name:              processing-engine-pe-v1-ignite
Namespace:         ignite
Labels:            app.kubernetes.io/managed-by=Helm
Annotations:       meta.helm.sh/release-name: pe-v1
                   meta.helm.sh/release-namespace: ignite
Selector:          type=processing-engine-pe-v1.node
Type:              ClusterIP
IP:                None
Port:              service-discovery  47500/TCP
TargetPort:        47500/TCP
Endpoints:         10.244.0.34:47500,10.244.1.31:47500
Session Affinity:  None
Events:            <none>
But somehow I still get a 403:
2020-07-10 22:08:51,837 INFO  [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] (ServerService Thread Pool -- 15) Successfully bound to TCP port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=c651239a-2964-4b8b-915b-c055bcf410ed]
2020-07-10 22:08:52,029 ERROR [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] (ServerService Thread Pool -- 15) Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries).: class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses.
    at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
    at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
    at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1848)
    at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1377)
    at java.lang.Thread.run(Thread.java:748)
    at org.jboss.threads.JBossThread.run(JBossThread.java:485)
Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/ignite/endpoints/processing-engine-pe-v1-ignite
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1900)
2020-07-10 22:13:47,219 ERROR [org.jboss.as.controller.management-operation] (main) WFLYCTL0190: Step handler org.jboss.as.server.deployment.DeploymentHandlerUtil$1@63778853 for operation add at address [("deployment" => "reg.war")] failed handling operation rollback -- java.util.concurrent.TimeoutException: java.util.concurrent.TimeoutException
    at org.jboss.as.controller.OperationContextImpl.waitForRemovals(OperationContextImpl.java:523)
    at org.wildfly.swarm.bootstrap.Main.main(Main.java:87)
2020-07-10 22:13:52,220 ERROR [org.jboss.as.controller.management-operation] (main) WFLYCTL0349: Timeout after [5] seconds waiting for service container stability while finalizing an operation. Process must be restarted. Step that first updated the service container was 'add' at address '[("deployment" => "reg.war")]'
2020-07-10 22:13:52,225 ERROR [stderr] (main) org.wildfly.swarm.container.DeploymentException: org.wildfly.swarm.container.DeploymentException: THORN0004: Deployment failed: WFLYCTL0344: Operation timed out awaiting service container stability
2020-07-10 22:13:52,226 ERROR [stderr] (main)     at org.wildfly.swarm.container.runtime.RuntimeDeployer.deploy(RuntimeDeployer.java:301)
2020-07-10 22:13:52,230 ERROR [stderr] (main)     at org.wildfly.swarm.bootstrap.Main.main(Main.java:87)
2020-07-10 22:13:52,230 ERROR [stderr] (main) Caused by: org.wildfly.swarm.container.DeploymentException: THORN0004: Deployment failed: WFLYCTL0344: Operation timed out awaiting service container stability
2020-07-10 22:13:52,230 ERROR [stderr] (main)     at org.wildfly.swarm.container.runtime.RuntimeDeployer.deploy(RuntimeDeployer.java:296)
2020-07-10 22:13:52,230 ERROR [stderr] (main) ... 22 more
2020-07-10 22:13:52,808 ERROR [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] (ServerService Thread Pool -- 15) Failed to get registered addresses from IP finder on start (retrying every 2000ms; change 'reconnectDelay' to configure the frequency of retries).: class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite pods IP addresses.
    at org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
    at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1900)
    at org.jboss.threads.JBossThread.run(JBossThread.java:485)
Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://kubernetes.default.svc.cluster.local:443/api/v1/namespaces/ignite/endpoints/processing-engine-pe-v1-ignite
Re: Ignite on AKS and RBAC issue
Posted by akorensh <al...@gmail.com>.
Hi,
I see that you've bound everything to the "default" namespace.
kubectl describe serviceaccount ignite
Name: ignite
Namespace: default
Make everything in the "ignite" namespace as described here:
https://apacheignite.readme.io/docs/rbac-authorization
Follow their recommendations to deploy on K8s:
https://apacheignite.readme.io/docs/stateless-deployment
If that doesn't work, send over all your YAML files and I'll take a look.
Thanks, Alex
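For reference, the RBAC objects shown in the thread correspond to manifests roughly like the following (a sketch that mirrors the kubectl output above and assumes the "ignite" namespace already exists; Helm labels/annotations are omitted):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ignite
  namespace: ignite
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ignite
rules:
  # Core-API-group resources the Kubernetes IP finder reads
  - apiGroups: [""]
    resources: [pods, endpoints]
    verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ignite
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ignite
subjects:
  - kind: ServiceAccount
    name: ignite
    namespace: ignite
```

The grant can be verified with `kubectl auth can-i get endpoints -n ignite --as=system:serviceaccount:ignite:ignite`. Note that the binding only helps pods that actually run as that service account, i.e. the Deployment's pod spec must set serviceAccountName: ignite; otherwise the pod uses the namespace's "default" account and the endpoints lookup returns 403.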