I have a series of points around the greater London area (some may also be 100-200 km away from it) like these, in the format Lon, Lat:
43032.1,32681 95816.5,34473.8 15439.3,-13987
The Application to which those points are transferred uses leaflet but does not use any of EPSG3857, EPSG4326, EPSG3395 from http://leafletjs.com/reference.html unfortunately. At least that's what I found out when using the proj cmdline tool like this:
echo -48884.4 64535.7 | cs2cs -f "%.10f" +init=epsg:3395 +to +init=epsg:4326
I tried a lot of possible projections and also flipped the coordinate order, but I cannot find a way to get a plausible result, which should be something like 51.xxxx, -0.1xxxxx.
Does someone have any idea on what projection may be used here?
I'm not sure if this will help in your case, but this online tool may be worth a try. If you can create a shapefile of the points, this tool will attempt to guess the projection.
From Projection Guesser:
One of the joys of map making is getting a shapefile without a projection. We eventually decided to stop doing those puzzles manually and wrote something that harnesses the power of PostGIS to try every single projection in its database.
- Zip your .shp + .shx (you don't need the .dbf)
- Drag the .zip onto the map.
- Click on the shape that looks correct
- Click on the .prj link
- Save the contents of that page to your .prj file
It says it works best with single polygons; I don't know if it will work on points.
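As a quick sanity check before uploading anything, you can also invert candidate projections by hand. A spherical-Mercator inverse (the EPSG:3857 flavour) is just a few lines of pure Python; if a projection is the right one, your points should come back as plausible lon/lat values near London:

```python
import math

R = 6378137.0  # spherical-Mercator earth radius in metres

def mercator_inverse(x: float, y: float) -> tuple[float, float]:
    """Invert spherical (web) Mercator: metres -> (lon, lat) in degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(2 * math.atan(math.exp(y / R)) - math.pi / 2)
    return lon, lat

# One of the mystery points from the question:
print(mercator_inverse(43032.1, 32681.0))
# If the result is nowhere near lon -0.1xx, lat 51.xx, then EPSG:3857
# is not the projection in use.
```

The same shape of test can be written for any candidate projection whose formulas you can find.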
I have heard rumors that configuring this also requires the User Profile service application to be configured so that the OAuth/claims augmentation/etc. stuff can all be in place. Unfortunately I haven't had the chance to confirm this yet.
My experience with this is I forgot to put the account I was logged in as on the SharePoint server in the Workflow Manager Admin group. Added the account, logged out and back in (so I would get a new token), and it worked fine. I guess there is an admin group for a reason, right? :)
I had a similar error but I got it working based on this blog.
Just to help anyone else who may experience the same, here's an excerpt:
You may receive one of the following errors and I’ve commented each with the resolution I used:
Register-SPWorkflowService : The caller does not have the necessary permissions required for this operation. Permissions granted: None. Required permissions: WriteScope. HTTP headers received from the server - ActivityId: 5e2b96c5-f971-48c9-b3fd-405c3616e1c7. NodeId: SP2. Scope: /SharePoint. Client ActivityId : 8e592951-0027-40c6-b996-ba3dd194fdea.
CONTOSO\svcSetupAcct is not a member of the workflow admin group, CONTOSO\WFAdmins.
Add CONTOSO\svcSetupAcct to CONTOSO\WFAdmins and re-run the Register-SPWorkflowService PowerShell cmdlet. You may need to log out and log back in to acquire an updated security token.
Cannot open database "WSS_Content_WFTest" requested by the login. The login failed.
Login failed for user 'CONTOSO\svcSetupAcct'.
CONTOSO\svcSetupAcct has not been granted ShellAdmin access to the WSS_Content_WFTest content database.
Grant CONTOSO\svcSetupAcct shell admin access to the desired content database using PowerShell similar to the following: Add-SPShellAdmin CONTOSO\svcSetupAcct -database (Get-SPContentDatabase WSS_Content_WFTest)
Register-SPWorkflowService -SPSite url -WorkflowHostUri url
Register-SPWorkflowService : Failed to query the OAuth S2S metadata endpoint at URI. Error details: 'The metadata endpoint responded with an error. HTTP status code: Forbidden.'. HTTP headers received from the server - ActivityId: b5163152-3e31-4809-a532-5e20d1320027. NodeId: WF. Scope: /SharePoint. Client ActivityId : b66b0ea4-d9a7-4d2d-8be8-3a0c58ab728c.
Incorrect use of parameters
Notice that the SharePoint site is non-SSL, but the -AllowOAuthHttp parameter was not specified. For a non-SSL SharePoint site, the -AllowOAuthHttp parameter must be used.
The main point is that we make a sharp distinction between obscurity and secrecy. If we must narrow the difference down to a single property, then that must be measurability. A secret is that which is not known to outsiders, and we know how much it is unknown to these outsiders. For instance, a 128-bit symmetric key is a sequence of 128 bits, such that all 2^128 possible sequences stand an equal probability of being used, so an attacker trying to guess such a key needs to try, on average, at least 2^127 of them before hitting the right one. That's quantitative: we can do the math, add figures, and compute attack cost.
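That measurability can be shown with a few lines; the guessing rate below is an assumed figure for illustration, not a measured one:

```python
# Quantifying secrecy: expected brute-force effort for an n-bit key.
# The guesses-per-second figure is an assumption, not a benchmark.

def expected_guesses(bits: int) -> int:
    """Average number of trials before hitting a uniformly random n-bit key."""
    return 2 ** (bits - 1)

def years_to_break(bits: int, guesses_per_second: float) -> float:
    """Expected wall-clock years at a given (assumed) guessing rate."""
    seconds = expected_guesses(bits) / guesses_per_second
    return seconds / (365.25 * 24 * 3600)

if __name__ == "__main__":
    # Assume an attacker testing 10^12 keys per second.
    print(f"128-bit key: {expected_guesses(128):.3e} guesses on average")
    print(f"             ~{years_to_break(128, 1e12):.3e} years at 1e12 keys/s")
```

No equivalent computation exists for "how long until someone reverse-engineers my algorithm", which is exactly the point of the distinction.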
The same would apply to an RSA private key. The maths are more complex because the most effective known attacks rely on integer factorization, and the algorithms involved are not as easy to quantify as brute force on a symmetric key (there are a lot of details on RAM usage and parallelism, or lack thereof). But that's still secrecy.
By contrast, an obscure algorithm is "secret" only as long as the attacker does not work out the algorithm details, and that depends on a lot of factors: access to hardware implementing the algorithm, skill at reverse engineering, and smartness. We have no useful way to measure how smart someone can be. So such an algorithm cannot meaningfully be called "secret". We have another term for that, and that's "obscure".
We want security through secrecy because security is risk management: we accept the overhead of using a security system because we can measure how much it costs us to use it and how much it reduces the risk of successful attacks, and we can then balance the costs to make an informed decision. This works only because we can put numbers on the risk of successful attacks, and that can be done only with secrecy, not with obscurity.
I think that the term "security through obscurity" gets misused quite often.
The most frequently referred to quote when talking about security through obscurity is Kerckhoffs's principle.
It must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience
Security through obscurity refers to relying on keeping a system secure by hiding its design and implementation details from an attacker. This isn't very reliable, as systems and protocols can be reverse engineered and taken apart given enough time. Also, a system that relies on hiding its implementation cannot benefit from experts examining it for weaknesses, which probably leads to more security flaws than in a system that has been examined publicly, had its bugs made known, and had them fixed.
Take RSA for example. Everyone in the world knows how it works. Well, everyone with a good grasp of the mathematics involved, anyhow. It is well studied and relies on difficult mathematical problems. Given what we know about the mathematics involved, it is secure provided the values of p and q are kept secret. This essentially concentrates the work of breaking (and protecting) the system into one secret that can be protected.
Compare this with an encryption algorithm that does not subscribe to Kerckhoffs's principle. Instead of a publicly known scheme that uses a secret key, the encryption algorithm itself is the secret. Anyone who knows the algorithm can decrypt any data encrypted with it. This is very difficult to secure, as the algorithm will be nearly impossible to keep out of the hands of an enemy. See the Enigma machine for a good example of this.
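To make the contrast concrete, here is a toy RSA round trip with deliberately tiny primes. It is utterly insecure and for illustration only, but it shows where the secret lives: the algorithm is fully public, and only p and q (and the d derived from them) are secret.

```python
# Toy RSA with tiny primes -- illustration only, never use for real data.
# The algorithm is entirely public; security rests on keeping p and q secret.

p, q = 61, 53          # the secret primes
n = p * q              # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                 # public exponent, coprime with phi
d = pow(e, -1, phi)    # private exponent, derivable only from the secret p, q

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

message = 42
assert decrypt(encrypt(message)) == message
```

With realistic key sizes the same structure holds; only the size of p and q changes.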
Security is all about keeping secrets, but good security lies in knowing which secrets you can keep, and which you cannot.
And in particular, the best security protocols are built around the principle of factoring the secret out of the design, so that your secret can be kept without having to keep the design secret as well. This is particularly important because system designs are notoriously impossible to keep secret. This is the core of Kerckhoffs's principle, which goes back to the design of old military encryption machines.
In other words, if your algorithm is your secret, then anyone who sees an implementation of your algorithm -- anyone who has your hardware, anyone who has your software, anyone who uses your service -- has seen your secret. The algorithm is a terrible place to put secrets, because algorithms are so easy to examine. Moreover, secrets embedded in designs can't be changed without changing the implementation: you're stuck with the same secret forever.
But if your machine doesn't need to be kept secret, if you've designed your system such that the secret is independent of the machine -- some secret key or password -- then your system will remain secure even after the device is examined by your enemies, hackers, customers, etc. This way you can focus your attention on protecting just the password, while remaining confident that your system can't be broken without it.
The critical difference is in what is kept secret.
Take RSA as an example. The core principle of RSA is simple mathematics. Anyone with a little mathematical knowledge can figure out how RSA works functionally (the math is almost half a millennium old). It takes more imagination and experience to figure out how you could leverage that for security, but it has been done independently at least twice (by Rivest, Shamir and Adleman, and a few years earlier by Clifford Cocks). If you design something like RSA and keep it secret, there's a good chance that someone else will be clever enough to figure it out.
On the other hand, a private key is generated at random. When done correctly, random generation ensures that it is impossible to reconstruct the secret with humanly available computing power. No amount of cleverness will allow anyone to reconstruct a secret string of random bits, because that string has no structure to intuit.
Cryptographic algorithms are invented out of cleverness, with largely-shared goals (protect some data, implement the algorithm inexpensively, …). There's a good chance that clever people will converge on the same algorithm. On the other hand, random strings of secret bits are plentiful and, by definition, people won't come up with the same random string¹. So if you design your own algorithm, there's a good chance that your neighbor will design the same one. And if you share your algorithm with your buddy and then want to communicate privately without him, you'll need a new algorithm. But if you generate a secret key, it'll be distinct from your neighbor's and your buddy's. There's definitely potential value in keeping a random key secret, which is not the case for keeping an algorithm secret.
A secondary point about key secrecy is that it can be measured. With a good random generator, if you generate a random n-bit string and keep it secret, there is a probability of 1/2^n that someone else will find it in one try. If you design an algorithm, the risk that someone else will figure it out cannot be measured.
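Generating such a measurably-secret string takes one line with a cryptographic random source:

```python
import secrets

# Generate a 128-bit secret. The probability of guessing it in one try
# is exactly 1 / 2**128 -- a measurable quantity, unlike the odds of
# someone re-deriving a clever but unpublished algorithm.
key = secrets.randbits(128)
one_try_odds = 1 / 2 ** 128

print(f"key  = {key:032x}")
print(f"odds = {one_try_odds:.3e}")
```

Note the use of the `secrets` module rather than `random`, whose generator is not suitable for security purposes.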
RSA private keys aren't a simple random string — they do have some structure, being a pair of prime numbers. However the amount of entropy — the number of possible RSA keys of a certain size — is large enough to make one practically unguessable. (As for RSA keys being practically impossible to reconstruct from a public key and a bunch of plaintexts and ciphertexts, that's something we can't prove mathematically, but we believe to be the case because lots of clever people have tried and failed. But that's another story.)
Of course this generalizes to any cryptographic algorithm. Keep random strings secret. Publish clever designs.
This isn't to say that everything should be made public except for the small part that's a random bunch of bits. Kerckhoffs's principle doesn't say that — it says that the security of the design should not rely on the secrecy of the design. While cryptographic algorithms are best published (and you should wait a decade or so before using them, to see if enough people have failed to break them), there are other security measures that are best kept secret, in particular security measures that require active probing to figure out. For example, some firewall rules can fall into this category; however, a firewall that doesn't offer protection against an attacker who knows the rules would be useless, since eventually someone will figure them out.
¹ While this is not true mathematically speaking, you can literally bet on it.
I had this problem with VS 2010, and it was as simple as terminating the "WebDev.WebServer40.EXE" process. Although the icon was no longer showing in the system tray, the process was still running.
Could be a number of things. Try these (check the last one first):
- Disable IPv6
- Make sure there isn't an edit in the hosts file for localhost
- Check firewall/antivirus settings to allow connections to/from devenv.exe
- If you can preview in the browser, make sure the URL in the browser uses the same port number as the one shown in the ASP.NET Development Server taskbar icon.
- Try setting a fixed, predefined port in the project properties
I got these from a couple of forums elsewhere; hopefully they can help. Good luck. Let us know what works; some more about your environment (firewall, antivirus, etc.) would help as well.
Under project settings, try specifying a different port like 64773 for example. I have encountered this issue many times and it has always worked for me.
This is caused by the project's port still being in use because a previous dev-server process never exited. You need to end that process using Task Manager:
Press Ctrl+Alt+Delete and open Task Manager.
Find the ASP.NET Development Server process (WebDev.WebServer40.exe for VS2010) and press End Process.
Now you can continue with the VS2010 Run button.
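If you'd rather check from a script whether the dev server's port is actually free before restarting, a quick socket probe works (the port number 64773 below is just an example value, as suggested in another answer):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    port = 64773  # whatever fixed port your project is configured to use
    if port_in_use(port):
        print(f"Port {port} is busy -- kill the stale WebDev.WebServer process first.")
    else:
        print(f"Port {port} is free.")
```

If the port shows busy but no dev-server icon is in the tray, that is exactly the orphaned-process situation described above.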
I went to the project file and changed the development server port to 1504. Well 1504 worked on another project for me, so I went with that. Hope this helps.
I have tried all of the above solutions and others from other websites too but with no luck.
What worked for me was to rename or delete the applicationhost file:
C:\Users\User\Documents\IISExpress\config\applicationhost <- rename or delete.
That is very odd! I hate to suggest something as simple as restarting Visual Studio, but that sounds like the best place to start. Also, check your project settings. You said that you just downloaded this and tried to run it; perhaps the solution/project is not set up to use the Cassini server that ships with Visual Studio?
- Open the 'Website' menu in your Visual Studio IDE.
- Select 'Start Options'.
- Enable the 'Use Custom Server' radio button.
- Enter any URL you like, similar to 'http://localhost:8010/MyApp'.
Note 1: you can use any port number, not only '8010', but avoid ports designated for other services, like 8080 (HTTP alternate), 25 (SMTP), 21 (FTP), etc.
Note 2: you can use any name, not only 'MyApp'.
This solution works for sure unless your WebDev.Webserver.exe is physically corrupted.
Error: "Unable to connect to the ASP.NET Development Server".
Step 1: Select the "Tools -> External Tools" menu option in VS or Visual Web Developer. This will allow you to configure and add new menu items to your Tools menu. Add an entry that launches the ASP.NET Development Server (the web server that VS usually runs automatically), and then choose this menu option to launch a web server that has a root site on port 8010 (or whatever other port you want) for the project.
Step 2: Point the project at this web server instead of the one VS starts itself. To do this, select your web project in Solution Explorer, right-click and select "Property Pages". Select the "Start Options" setting on the left, and under Server change the radio button value from the default (use built-in web server) to "Use custom server". Then set the Base URL value to the server you started above, e.g. http://localhost:8010/.
Development Server is not available because it is already used by another web server.
There is generally not a single formal model you can just adopt -- and for a good reason. The right access control structure needs to depend upon the specifics of your application, so there is no one-size-fits-all answer.
In general, figuring out how to define access control involves looking at a few questions:
What are the resources I need to protect? What are the resources that are most security-critical? This will depend upon your application, but it might be things like individual blog posts, the comments on a blog post, or the theme/layout for a particular blog.
What are the actions one can perform on those resources? Next, define the operations or actions someone might want to perform on those resources. Often these are the basics like view (read), edit (write), create, and delete.
Who are the individuals who might want to perform such an action on such a resource? In access control parlance, these are sometimes called the "principals". For instance, this might be the set of users with accounts on your site (each user is a principal). You might also define groups (e.g., all administrators, or all users associated with a particular blog), or you might allow users of your site to define groups.
What limits do I want to put on these actions? You can define a security policy. One way to define a security policy is to determine, for each resource, what actions each principal can perform on that resource. This might be described more concisely in terms of groups (e.g., any administrator can edit any blog post). You can decide whether a single, fixed security policy is more appropriate for your application, or whether it is more useful to allow the users to determine the security policy. Often it is way too much to expect users to write down all allowed combinations of (principal, action, resource), so you'll need to think about what are some common policies that might make sense for your application, to make it easier for users.
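The resource/action/principal breakdown above can be prototyped in a few lines. In this sketch all the names ("admins", "post:42", etc.) are invented for illustration; the policy is a set of (principal, action, resource) triples, with group membership and wildcards providing the concise group-level rules mentioned above:

```python
# Minimal access-control check built from the four questions above:
# resources, actions, principals, and a policy tying them together.
# All names here are illustrative, not from any real system.

groups = {
    "admins": {"alice"},
    "blog42_authors": {"bob", "carol"},
}

# Policy: who (user, group, or "*" for everyone) may do what to which resource.
policy = {
    ("admins", "edit", "*"),               # any administrator can edit anything
    ("blog42_authors", "edit", "post:42"),
    ("*", "view", "post:42"),              # everyone can view this post
}

def principals_for(user: str):
    """All principals a user acts as: themselves, 'everyone', and their groups."""
    yield user
    yield "*"
    for group, members in groups.items():
        if user in members:
            yield group

def allowed(user: str, action: str, resource: str) -> bool:
    return any(
        (p, action, r) in policy
        for p in principals_for(user)
        for r in (resource, "*")
    )

assert allowed("alice", "edit", "post:99")   # via admins + wildcard resource
assert allowed("bob", "edit", "post:42")     # via group membership
assert allowed("dave", "view", "post:42")    # via the "*" principal
assert not allowed("dave", "edit", "post:42")
```

A real application would store the policy in a database rather than a literal set, but the shape of the check is the same.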
A few other comments.
I would suggest you look at some other example applications that are similar to the one you want to build, to see how they do access control. There are many blogging platforms; why don't you take a look at how they allow blog administrators and bloggers to control access to the blog? See what you like and what you don't, and what seems to work well for users and what doesn't, and learn from that.
Also, I want to introduce you to the concept of user-driven access control, where instead of you (the software developer/administrator) trying to set a security policy for your site, you allow users to determine their own security policy through their own actions. For instance, each blog post might come with its own "view" link that users can share with others: if you share this link with Alice, then Alice will be able to view the blog post. You can include a random unguessable token in this link, and then knowledge of the URL is all that is needed to view the blog post. As another example, each blog post might come with an "edit" link that the blog owner can share with someone else to let them edit the blog post collaboratively -- think of how you can share a Google Docs document with someone else, for instance. In this model, each of these links is a capability that grants authorization to perform a certain action on a particular resource or collection of resources. In some circumstances, this approach can be useful, because it provides users with the flexibility to determine their own secure sharing patterns.
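A capability link of the sort described can be minted with an unguessable random token. Here's a sketch (the URL format and domain are invented for illustration):

```python
import secrets

def make_view_link(post_id: int) -> str:
    """Mint a share link whose secret token *is* the authorization."""
    token = secrets.token_urlsafe(32)   # 256 bits of randomness, URL-safe
    # In a real app you would persist (token -> (post_id, "view")) server-side,
    # so the token can be looked up and, if needed, revoked.
    return f"https://blog.example/posts/{post_id}/view?token={token}"

link = make_view_link(7)
print(link)
```

Revocation is the main design consideration here: because the link is the credential, you need a way to invalidate tokens once sharing should stop.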
Recent economic and environmental constraints push supply chain management systems to adopt closed-loop supply chain operating modes, which have to address very complex problems including end-user quality of service, environmental considerations, and daily transportation-time variations. These relevant and challenging research areas require proper coordination between the data-provider software (Transport Management Software) and the operational research tool in charge of trip definition.
This paper proposes a decision support system for the Vehicle Routing Problem that is able to tackle very large instances with real-life constraints. Our contribution is an architecture that handles both static resolution, prior to the start of routes, and dynamic updates while routes are being completed. It is implemented as a REST-based API using numerous state-of-the-art operational research methods. Moreover, the system is used in practice by the Mapotempo company.
Directory Traversal: What effect does this '?' and '.' have on the url?
I asked a question on this very site - Unable to understand why the web app is vulnerable to a Directory traversal attack - where I was given a report stating my web app was vulnerable.
I posted a few samples from the report, like Testing Path: http://127.0.0.1:80/??/etc/issue <- VULNERABLE!, and now I've been asked what the /?? in the posted URL is.
I ran a few tests:
http://127.0.0.1:80/??/etc/issue returns Home page.
http://127.0.0.1:80/.?/etc/issue returns Home page.
http://127.0.0.1:80/?./etc/issue returns Home page.
So, the following pattern returns the home page:
http://127.0.0.1:80/Position1Position2Anything/Anythingcouldbehere, where:
If Position1 = ?, the home page is returned irrespective of the contents at Position2.
If Position1 = ., then Position2 must be ? for the home page.
Anything could be an empty string too.
Anything which doesn't match the pattern above returns 400/404.
And I ran the above test against security.stackexchange.com/ too; it returned the same result (followed the same pattern of . and ?) and showed its home page in the browser.
Please explain the role of ? and . in these URLs.
It's only this pattern (the one above, with ? and .) which makes the web app vulnerable to a directory traversal attack, as per the report sent by the pen-testers.
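One way to see what the server actually receives is to split those URLs the way a URL parser does. The first ? in a URL begins the query string, so /?? is really the path / with a query of ?/etc/issue, and /.? is the path /. (which normalizes to /) with a query:

```python
from urllib.parse import urlsplit

for url in ("http://127.0.0.1:80/??/etc/issue",
            "http://127.0.0.1:80/.?/etc/issue"):
    parts = urlsplit(url)
    print(url, "-> path:", parts.path, " query:", parts.query)

# /??/etc/issue -> path "/"  query "?/etc/issue"  (server serves the home page)
# /.?/etc/issue -> path "/." query "/etc/issue"   ("/." normalizes to "/")
```

This explains the observed pattern: in every case the requested *path* is just the site root, which is why the home page comes back.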
What is the best procedure or program for creating a 'realistic' worldmap to suit my setting?
I have a fairly-straightforward alternate modern-day Earth setting, somewhat similar to Ace Combat's Strangereal or the 'America' seen in Rockstar North's GTA series (real life, with the serial numbers filed off).
I'd like to take my counterpart countries etc and place them on a map that's reminiscent of Earth - similar proportions of water to land, ice to forest, etc - but which doesn't reflect the continental or country layout of our home planet.
It would be nice to accommodate (even coarsely) procedures like erosion, rain shadows, tectonic plate movement, etc in the production of this world / worldmap, but unfortunately I am no geologist / tectonic engineer.
It's a somewhat grandiose question, but: is there a means to automate the process of building a mostly-accurate new planet? Can you recommend a piece of software, a web app, or a guide I can follow to construct my globe?
This kind of imaginative exercise is quite new to me; any suggestions would be appreciated! Thank you!
I promised nothing! Extending the above monkey-patch to tee stdout and stderr is left as an exercise to the reader with a barrel-full of free time. ("It ain't me, babe.") [Links to meme material on YouTube]
Feast on the unexpected awesome of bear typing:
So what's the rub, bub?
To prevent well-meaning (but sadly small-minded) coworkers from removing the type checking you silently added after last Friday's caffeine-addled allnighter to your geriatric legacy Django web app, type checking must be fast. So fast that no one notices it's there when you add it without telling anyone. I do this all the time! Stop reading this if you are a coworker.
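For flavor, here is a toy runtime type-checking decorator in the same spirit, written in pure stdlib Python; it is nothing like as fast or as complete as beartype itself, and the names are made up:

```python
import functools
import inspect

def bearish(func):
    """Check annotated arguments at call time; raise TypeError on mismatch."""
    sig = inspect.signature(func)
    # Only plain classes are checked in this toy; real checkers handle far more.
    hints = {k: v for k, v in func.__annotations__.items() if isinstance(v, type)}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError(
                    f"{func.__name__}() argument {name!r} must be "
                    f"{expected.__name__}, got {type(value).__name__}"
                )
        return func(*args, **kwargs)

    return wrapper

@bearish
def grizzle(n: int) -> str:
    return "grr" * n

print(grizzle(2))          # "grrgrr"
try:
    grizzle("two")
except TypeError as exc:
    print(exc)
```

The per-call `sig.bind` here is exactly the kind of overhead beartype goes to great lengths to avoid.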
Just because. Welcome to bear typing.
Correct way to handle security threats to web server on budget [closed]
During our annual security review I was reminded of an incident earlier this year where we received a threat against our organization's web server. It was over an organization policy, and the sender threatened to DDoS our site. Fortunately, nothing bad came of it and it turned out to be an empty threat. However, we still immediately notified the CIO, CSO, CEO, and our hosting provider, who applauded our response. Due to the nature of our organization (in education), the preemptive response involved many people, including coordination with local law enforcement.
Even though our response was plenty for an empty threat it is making me realize how little attack planning the web app has undergone. Right now the setup is:
- A Linode VPS that is not behind the enterprise firewall (there's a long story behind this that isn't worth explaining)
- a PostgreSQL DB on the same server that only allows local connections
- an Nginx server that we are currently securing by following best practices
- SSH access that we are migrating to certificate authentication
- A backup VPS that has all the latest server settings and just needs the latest version of code pushed and database settings migrated (Right now used as a test server but also envisioned as a georedundancy option)
I guess my question boils down to: what other steps should I take to lock down my server, and how do I protect against DDoS? We would love to use Cloudflare Business with their DDoS protection, but we don't always need it and $200 a month is a bit steep for the organization. Do I even need it? Is there a solution that allows temporary DDoS protection? If not, what is the best way to maintain stability during/after an attack? Finally, what logging should be implemented so that we can assist law enforcement in the event an attack occurs?
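Short of a commercial scrubbing service, basic Nginx rate and connection limiting will absorb small floods for free. A sketch (zone names and rates here are illustrative; tune them to your real traffic):

```nginx
# Illustrative Nginx rate/connection limiting - adjust rates to real traffic.
http {
    # 10 requests/second per client IP, tracked in a 10 MB shared zone.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    # Track concurrent connections per client IP.
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location / {
            limit_req zone=perip burst=20 nodelay;
            limit_conn addr 20;
        }
    }
}
```

This will not stop a large volumetric attack (that traffic saturates your link before Nginx sees it), but it keeps the application responsive under abusive request rates. For evidence useful to law enforcement, keep the access and error logs with accurate timestamps and ship them off-box, so an attacker who compromises the VPS cannot destroy them.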
How to deal with a company that doesn't fix (potential) security vulnerabilities in their web app?
About 2 weeks ago, I stumbled across a web application, that can be used by gyms to manage the information about their members. This includes data like the name, billing address, birth date, and medical history. The gym I am visiting (in Europe) is also using this application and so I took a closer look at the application. I didn't dig very deep to avoid legal issues, but these are some of the "problems" I found:
- The login allows infinite tries
- The JSON response from the backend includes information whether the username or password was incorrect
- The user password is stored in the local storage in plain text
- There is an unrestricted file upload for profile pictures
- An old PHP version is used
- There are multiple backends that throw exceptions (this way I could find out which PHP framework they are using)
- Session IDs can be overwritten (Session fixation)
- It seems like there is no input validation. They are using React, so XSS is not as easy but still possible
None of these seems super-critical on its own, unless someone really takes their time and tries to exploit these potential vulnerabilities. From what I can tell, there are at least 20,000 customers stored in their database. Also, it seems like all the customer data is stored in one big table for all the different gyms that are using this application.
The kind of data that is stored about the customers seems to be very personal and shouldn't end up in the wrong hands, I guess. So I contacted the company anonymously and told them about my concerns. They responded a few days ago and said that they fixed everything; however, I checked, and basically nothing has changed in the web application (still the same vulnerabilities).
So here is my question: how should I proceed? Should I give them a second chance, or contact some kind of data protection authority? And would you consider these problems/vulnerabilities critical? (As I already said, I didn't dig too deep, but even with my limited security knowledge I think I could get most of the user data into my hands within a few days.)