CopyPastor

Detecting plagiarism made easy.

Score: 0.8031697869300842; Reported for: String similarity

Possible Plagiarism

Plagiarized on 2019-07-21
by David Maze

Original Post

Original - Posted on 2010-07-30
by Eric



            

I would absolutely 100% set up a separate test cluster. (...assuming a setup large enough that Kubernetes makes sense; I might consider a simpler deployment system for a basic three-tier app like the one you're describing.)
At a financial level this shouldn't make much difference to you. You'll need some amount of hardware to run the test copy of your application, and your organization will be paying for it whether it's in the same cluster or a different cluster. The additional cost will only be the cost of the management plane, which shouldn't be excessive.
At an operational level, there are all kinds of things that can go wrong during a deployment, and in particular there are cases where one Kubernetes resource can "step on" another. Deploying to a physically separate cluster helps minimize the risk of accidents in production; you won't accidentally overwrite the prod deployment's ConfigMap holding its database configuration, for example. If you have some sort of crash reporting or alerting set up, "it came from the test cluster" is a very clear check you can use to avoid waking up the DevOps team. It also gives you a place to try out possibly risky configuration changes: if you run your update script once in the test cluster and it passes, you can re-run it in prod; but if the first time you run it is in prod and it fails, that's an outage.
Depending on what you're using for a CI system, the other thing you can set up is fully automated deploys to the test environment. If a commit passes its own unit tests, you can have the test environment always running current `master` and run integration tests there. If and only if those integration tests pass, you can promote to the production environment.
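For concreteness, here is a minimal sketch of that "promote only if the integration tests pass" step, written as a small Java helper a CI job might invoke. The kubectl context names (`test`, `prod`), the `k8s/` manifest directory, and the integration-test command are assumptions for illustration, not details from the answer:

```java
import java.io.IOException;

// Hypothetical CI step: deploy to the test cluster, run integration tests
// against it, and only promote to prod if everything passed.
public class PromoteIfGreen {

    // Run a command, stream its output into the CI log, and return its exit code.
    static int run(String... cmd) throws IOException, InterruptedException {
        return new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // 1. Roll the current build out to the *test* cluster only.
        if (run("kubectl", "--context", "test", "apply", "-f", "k8s/") != 0) {
            System.err.println("Test deploy failed; prod is untouched.");
            System.exit(1);
        }

        // 2. Run the integration tests against the test environment
        //    (the test-runner command here is a placeholder).
        if (run("./run-integration-tests.sh", "--env", "test") != 0) {
            System.err.println("Integration tests failed; not promoting.");
            System.exit(1);
        }

        // 3. Only now apply the same manifests to the prod context.
        System.exit(run("kubectl", "--context", "prod", "apply", "-f", "k8s/"));
    }
}
```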
Stop thinking of your application as a monolithic application. It is a set of UI screens through which the user can interact with your "application", plus "functions" provided via Android services.
What your mysterious app actually "does" is not really important. Let's assume it tunnels into some super-secure corporate intranet, performs some monitoring or interaction, and stays logged in until the user "quits the application". Because your IT department commands it, users must be very conscious of whether they are IN or OUT of the intranet. Hence your mindset that it is important for users to "quit".
This is simple. Make a service that puts an ongoing notification in the notification bar saying "I'm in the intranet" or "I am running". Have that service perform all the functionality you need for your "application". Have activities bind to that service so your users can access the bits of UI they need to interact with it. And add an Android menu -> Quit (or log out, or whatever) item that tells the service to stop and then finishes the activity itself.
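As a rough illustration only (the class name, notification channel, and user-visible text below are assumptions, not anything from the answer), the service half of that pattern might look something like this:

```java
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.app.Service;
import android.content.Intent;
import android.os.Binder;
import android.os.IBinder;

// Hypothetical long-running "I'm connected" service: while it runs, an ongoing
// notification tells the user the app is active; activities bind to it for UI,
// and a Quit menu item ultimately calls quit() to shut it down.
public class IntranetService extends Service {

    public class LocalBinder extends Binder {
        public IntranetService getService() { return IntranetService.this; }
    }

    private final IBinder binder = new LocalBinder();

    @Override
    public void onCreate() {
        super.onCreate();
        // A notification channel is required on Android 8.0+.
        getSystemService(NotificationManager.class).createNotificationChannel(
                new NotificationChannel("status", "Status", NotificationManager.IMPORTANCE_LOW));
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // The ongoing "I am running / I'm in the intranet" notification.
        Notification n = new Notification.Builder(this, "status")
                .setContentTitle("Connected to the intranet")
                .setSmallIcon(android.R.drawable.stat_notify_sync)
                .setOngoing(true)
                .build();
        startForeground(1, n);   // keeps the service alive and the user informed
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return binder;           // activities bind here to reach the "application"
    }

    // Called from an activity's Quit (or logout) menu item.
    public void quit() {
        stopForeground(true);    // remove the ongoing notification
        stopSelf();              // and actually stop the service
    }
}
```

An activity would bind to this service, call `quit()` from its options-menu Quit item, and then `finish()` itself; pressing Back instead simply leaves the service and its notification running.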
This is, for all intents and purposes, exactly what you say you want, done the Android way. Look at Google Talk or Google Maps Navigation for examples of this "exit is possible" mentality. The only difference is that pressing the Back button out of your activity might leave your UNIX process lying in wait, just in case the user wants to revive your application. This is really no different from a modern operating system that caches recently accessed files in memory. After you quit your Windows program, the resources it needed are most likely still in memory, waiting to be replaced by other resources as those are loaded, now that they are no longer needed. Android is the same thing.
I really don't see your problem.

        