Why is the context menu not showing?

I am trying to create and show a context menu for ArcMap Desktop 10.2. I have followed the steps in the link provided by Esri, which explains that I must override OnContextMenu. I am not sure if I am doing it right and need your help.

What I did was take the same code that is provided on the webpage and paste it into my project, but nothing happens when I right-click on the map. I am sure I am missing something… do I have to register the context menu somewhere?

I am a newbie, as is obvious from my other questions. I do a lot of reading, but it gets very confusing, so I need someone to simplify things a bit.

Here is my understanding of what I should do; correct me where I am wrong. According to the link provided below, there are two ways to get a context menu to show: either by implementing ITool.OnContextMenu, in case I want the menu to show for a specific tool, or by implementing IDocumentEvents.OnContextMenu. In both cases, the OnContextMenu event will automatically fire when the user right-clicks on the active view.

The implementation for the event is also provided in the link, which I copied and pasted into Visual Studio… now comes the part I am confused about: how do I get the event to fire? When running the project, ArcMap starts and I right-click on the active view, but no menu pops up. Do I have to create the menu myself using Windows Forms?

http://help.arcgis.com/en/sdk/10.0/arcobjects_net/conceptualhelp/index.html#//0001000004p9000000

Update: here is a chunk of the code. I want to start listening to the OnMouseDown and OnContextMenu events in this part:

public class PTCExternalComponent : BaseTool, ISchematicXmlGenerate, ISchematicXmlUpdate
{
    private Application _application;

    public void GenerateXmlData(string diagramName, string diagramClassName,
                                ref object xmlSource, ref bool cancel)
    {
        cancel = false;
        try
        {
            var app = NEAutomation2.Application.Instance().IApplication;
            var nameofActivetool = app.CurrentTool.Command.Name;
            var activeTool = (ITool)app.CurrentTool;
            var toolIndex = app.CurrentTool.Index;
            //((IToolbarControlEvents).OnItemClick(ss);
            // activeTool.OnMouseDown(0, 0, 0, 0);
            var diagramGenerator = DiagramGenerationFactory.GetDiagramGenerator(diagramName, diagramClassName);
            if (diagramGenerator == null)
            {
                Logger.ShowMessageDialog("Unable to find a Diagram Generator for diagram type [" + diagramClassName + "]", true);
            }
            else
            {
                bool canceled;
                var diagram = diagramGenerator.GenerateDiagram(out canceled);
                var sxg = new SchematicXmlGenerator(diagram, false); // for debugging
                if (diagram != null)
                    xmlSource = sxg.GenerateDiagramXml();
            }
        }
        catch (Exception e)
        {
            Logger.ShowMessageDialog("An exception occurred while trying to generate diagram of type " + diagramClassName + "; Exception " + e.Message, true);
            ProgressTrackingUtility.HideProgressBar();
        }
    }

    public override void OnMouseDown(int button, int shift, int i, int i1)
    {
        // do stuff here
    }

    public override bool OnContextMenu(int x, int y)
    {
        //do stuff here
        throw new NotImplementedException();
    }

    object ISchematicXmlGenerate.ApplicationHook
    {
        get { return _application as AppRef; }
        set { _application = value as Application; }
    }

    // Rest of Code…

Implementing BaseTool is the old way; there are quite a few things you will need to do to get ArcMap to 'see' the tool. I would strongly recommend creating a new project starting as an ArcGIS Add-in, which is an available option if you have the ArcObjects SDK installed… this does the hard part for you. If you choose to do it that way it would be much easier, as everything comes pre-wired, and you would need to implement ISchematicXmlGenerate and ISchematicXmlUpdate on top of ESRI.ArcGIS.Desktop.AddIns.Tool.

If you want to persist with what you've got, I can give you an example of functions from one of my old tools called Eddie_Lives in the namespace HandiToolBar:

[Guid("e272d412-85b4-4c5c-bfcf-33dee1dd3003")]
[ClassInterface(ClassInterfaceType.None)]
[ProgId("HandiToolBar.Eddie_Lives")]
public sealed class Eddie_Lives : BaseTool
{
    #region COM Registration Function(s)
    [ComRegisterFunction()]
    [ComVisible(false)]
    static void RegisterFunction(Type registerType)
    {
        // Required for ArcGIS Component Category Registrar support
        ArcGISCategoryRegistration(registerType);
    }

    [ComUnregisterFunction()]
    [ComVisible(false)]
    static void UnregisterFunction(Type registerType)
    {
        // Required for ArcGIS Component Category Registrar support
        ArcGISCategoryUnregistration(registerType);
    }

    #region ArcGIS Component Category Registrar generated code
    /// <summary>
    /// Required method for ArcGIS Component Category registration -
    /// Do not modify the contents of this method with the code editor.
    /// </summary>
    private static void ArcGISCategoryRegistration(Type registerType)
    {
        string regKey = string.Format(@"HKEY_CLASSES_ROOT\CLSID\{{{0}}}", registerType.GUID);
        MxCommands.Register(regKey);
    }

    /// <summary>
    /// Required method for ArcGIS Component Category unregistration -
    /// Do not modify the contents of this method with the code editor.
    /// </summary>
    private static void ArcGISCategoryUnregistration(Type registerType)
    {
        string regKey = string.Format(@"HKEY_CLASSES_ROOT\CLSID\{{{0}}}", registerType.GUID);
        MxCommands.Unregister(regKey);
    }
    #endregion
    #endregion

Obviously you will need to generate your own GUID because this one is already taken.

Then in the class constructor:

public Eddie_Lives()
{
    base.m_category = "my Tools";                         //localizable text
    base.m_caption = "Eddie, the editor.";                //localizable text
    base.m_message = "Opens the eddie editor interface."; //localizable text
    base.m_toolTip = "Opens the eddie editor interface."; //localizable text
    base.m_name = "Eddie Lives!"; //unique id, non-localizable (e.g. "MyCategory_ArcMapTool")

    try
    {
        //
        // TODO: change resource name if necessary
        //
        //string bitmapResourceName = GetType().Name + ".bmp";
        base.m_bitmap = Properties.Resources.Eddie.ToBitmap();
        //base.m_cursor = new System.Windows.Forms.Cursor(GetType(), GetType().Name + ".cur");
    }
    catch (Exception ex)
    {
        System.Diagnostics.Trace.WriteLine(ex.Message, "Invalid Bitmap");
    }
}

base refers to the BaseTool class you're inheriting from; m_category, m_caption and the rest are its protected fields.

Then in the overridden OnCreate method:

public override void OnCreate(object hook)
{
    m_application = hook as IApplication;

    //Disable if it is not ArcMap
    if (hook is IMxApplication)
        base.m_enabled = true;  // enable the tool
    else
        base.m_enabled = false;
}

After you've jumped through all those hoops you can add the compiled DLL (after it is registered using C:\Program Files (x86)\Common Files\ArcGIS\bin\ESRIRegAsm.exe) to ArcMap, and the tool will be available in the Customize dialog. Place it on a toolbar and your events should fire. COM types are being replaced by add-ins, which means this code will definitely have a limited lifespan before you need to create an add-in just to get it to work.

How to remove an item from my context menu?

I installed a program called Aptana Studio 3. It's added a menu item to my context menu (if I right-click, I see an option "Aptana Studio"). How can I manually remove this from my menu?

I have searched high and low and can't find a way to remove the Aptana stuff from my Firefox. Perhaps it's because I'm on Firefox 4. What I have done is I've gone into

I then searched for Aptana and there are 3 items that are listed. I've attached a screenshot.

My question now is: how can I remove these items? Perhaps this will remove the option from my menu.

Depends on how you define a context switch.

In the traditional sense it means saving all registers/CPU state, changing the MMU state, and then going elsewhere to service the call; after it's finished, everything is restored.

It is not necessary to save all state for every operation. For example, a mutex lock only needs to check that no other thread/process is using the mutex and then mark it as taken.

On a single-core CPU you can do that by ensuring that no interrupts happen during the mutex operation; then, by virtue of being past the point where interrupts were disabled, you know you are the only one touching the mutex at that moment. The only way another thread could be in the middle of the lock operation is if it re-enabled interrupts or it context-switched out because the mutex was already taken. Both scenarios, and where in the code they can happen, are under the full control of the kernel code.

Having said all that, saving the context isn't that expensive. The more expensive part is all the cache misses that happen as the instruction flow moves to cold memory.

There are different locations where a context menu can fire (if you right-click on a folder you'll get different options than if you right-click on a file). The type variable controls this behavior in the library, and you can reference this table to determine the type:

Name                  Location                                                        Action
FILES                 HKEY_CURRENT_USER\Software\Classes\*\shell                      Opens on a file
DIRECTORY             HKEY_CURRENT_USER\Software\Classes\Directory\shell              Opens on a directory
DIRECTORY_BACKGROUND  HKEY_CURRENT_USER\Software\Classes\Directory\Background\shell   Opens on the background of the directory
DRIVE                 HKEY_CURRENT_USER\Software\Classes\Drive\shell                  Opens on drives (think USBs)
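As a concrete illustration of the FILES location, a minimal .reg file along these lines adds an entry to the file context menu. The MyApp key name, caption, and executable path are hypothetical placeholders:

```reg
Windows Registry Editor Version 5.00

; Adds an "Open with MyApp" entry when right-clicking any file (* = all file types)
[HKEY_CURRENT_USER\Software\Classes\*\shell\MyApp]
@="Open with MyApp"

; The command run when the entry is clicked; "%1" receives the selected file's path
[HKEY_CURRENT_USER\Software\Classes\*\shell\MyApp\command]
@="\"C:\\Tools\\MyApp.exe\" \"%1\""
```

Merging the file (double-clicking it) creates the entry; deleting the MyApp key removes it again, which is also how you would remove a leftover entry an uninstaller failed to clean up.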

You can find out (part of) the context you are in by inspecting bpy.context.area.type in the Python Console.

This in itself will help you determine where you are (contextually speaking). Determining where you need to be is another story. Each operation that you do in Blender requires that you be in the correct context. The way I think of context is, what window (context) does the operation occur in when you do the action by clicking with the mouse? You also may need to refer to the API or other info, but one thing I found to help is knowing what contexts exist!

An easy way to find that out is to assign an invalid value to the area type from the Python Console, for example:

bpy.context.area.type = '?'

This throws a useful error:

TypeError: bpy_struct: item.attr = val: enum "?" not found in ('VIEW_3D', 'TIMELINE', 'GRAPH_EDITOR', 'DOPESHEET_EDITOR', 'NLA_EDITOR', 'IMAGE_EDITOR', 'SEQUENCE_EDITOR', 'CLIP_EDITOR', 'TEXT_EDITOR', 'NODE_EDITOR', 'LOGIC_EDITOR', 'PROPERTIES', 'OUTLINER', 'USER_PREFERENCES')

Well, there are your context types . . .

I don't know if that has successfully answered your question, because I do not write plugins yet, but it might have something in it that helps get you past the current issue.

Here is the context of the advice I gave earlier for this quote:

To avoid getting things I did not want, I installed the KDE apps explicitly.

pacman -S plasma-workspace plasma-desktop systemsettings kwin kdelibs kdebase-kdialog kde-gtk-config systemd-kcm kdesu krename kate kwrite retext kdegraphics-okular xdg-utils plasma-nm drkonqi kdeplasma-addons kscreen ksshaskpass ksysguard kwallet-pam user-manager firefox kompare kdiff3 gtk3 gimp gimp-ufraw xsane xsane-gimp gwenview qt5-imageformats kimageformats kcharselect kcolorchooser kdebase-keditbookmarks dolphin dolphin-plugins kdeutils-kdf ark spectacle pulseaudio plasma-pa kdemultimedia-kmix pulseaudio-alsa pavucontrol libdvdcss kwalletmanager kio-extras konsole kdebase-lib gnupg kdeutils-kgpg gnome-themes-standard xclip print-manager

Installing packages explicitly like this will leave your system with deprecated packages that could cause you problems in the future, e.g. here in your case:

kdebase kdegraphics-okular kdebase-keditbookmarks kdeutils-kdf kdemultimedia-kmix kdebase-lib kdeutils-kgpg

So, to install KDE properly, you should use either -meta packages or groups (for example, pacman -S plasma-meta kde-applications-meta). I highly recommend the -meta package approach, because groups can't enforce the installation of new packages when they are extended or modified.

If you insist on tuning your system, use the groups; you will have to do the cleaning yourself, but it is still much better than the very low-level approach you are currently taking.

Data alone is just noise.

At the bottom of the pyramid you have all of your raw data -- reams and reams of unprocessed, computer-friendly facts residing in databases and spreadsheets. It's a potential goldmine, sure, but every mine contains far more dross than ore.

In an era when it takes just two days to generate as much data as humankind amassed from the dawn of civilization to 2003, many enterprises are in danger of suffocating under the weight of their own data and lack the employees with the advanced data analysis skills to mine it.

"Data is becoming ever more central to business decision making, and is extending to people with little or no training in data use and interpretation," says business consultant Barry Devlin.

Data today arrives in various formats and is then stored across multiple complex systems or spread out across different departments. That's why three in four firms say they want to be data-driven, but fewer than 30 percent are actually successful at it.

Raw data has limited business value without context, since it fails to give employees the background they need to understand what it is, when it happened, where it happened, what else was going on, and so on. Throwing raw data at your employees doesn't allow them to leverage it to its full potential -- data must be processed for analysis or it will remain largely useless.


Hi AryaFathi,
Greetings! I am Vijay, an Independent Advisor. To start with, please execute the following steps and let me know (you might need to do the last point):

1. Press CTRL+SHIFT+ESC to start Task Manager.
Locate Windows Explorer in the Processes tab > right-click > End Task.
Now at the top: File > Run new task > type explorer.exe and press Enter.

3. Review the Microsoft help below:
Fix File Explorer if it won't open or start - https://support.microsoft.com/en-us/help/408737.

4. The cause of this problem can be third-party shell extensions. To get rid of the culprit extension, you can download ShellExView. ShellExView is a free third-party application that can be used to manage, disable and enable all of the shell extensions that you have on your computer.

Launch ShellExView by right-clicking on the application named shexview and clicking on Run as administrator

You will be met with a list of all the shell extensions installed on your computer once the program is done compiling it. Once you see the list, click on Options > Filter by Extension Type > Context Menu.

In the newly compiled list, you will see entries that have pink backgrounds. All of these entries are shell extensions installed on your computer by third-party applications.

Hold down the Ctrl key and click on each of the “pink background” entries to select them.

Once all of the “pink background” entries have been selected, right-click on them and click on Disable Selected Items to disable all of them.

Click on Options > Restart Explorer. Try right-clicking on your Desktop, and File Explorer should no longer crash.

Once you have fixed the problem, next comes identifying the culprit and disabling it for good. To do so, you are going to have to:

Right-click on any one of the “pink background” shell extensions that you disabled and click on Enable Selected Items to enable it.

Click on Options > Restart Explorer, right-click on your Desktop and see if File Explorer crashes.

If File Explorer does not crash, keep repeating the previous two steps, enabling a different third-party shell extension every time, until File Explorer crashes and you start experiencing the problem again.

The third-party shell extension you enabled just before the problem returned is the culprit. You can go ahead and enable all of the “pink background” shell extensions you disabled except for this one. Keep this shell extension disabled for good – in fact, it is recommended that you uninstall the third-party application that installed it on your computer altogether. (Source - https://appuals.com/file-explorer-crashing-afte. )

5. If nothing works, I would recommend that you perform a Windows 10 repair upgrade. A repair upgrade fixes Windows errors and retains all files, applications and settings. (You will not lose any data, though a backup is a good idea.) Below is a good guide to performing a repair upgrade.

Do let me know if you require any further help on this. Will be glad to help you.

Disclaimer - This is a non-Microsoft website. The page appears to be providing accurate, safe information. Watch out for ads on the site that may advertise products frequently classified as a PUP (Potentially Unwanted Products). Thoroughly research any product advertised on the site before you decide to download and install it.

Prediction is simply saying that something will happen in the future. This may be based on scientific knowledge, experience or something else.

Scientists predicted the earthquake a decade ago.

He predicted he would fail the test because he did not answer half of the questions.

The fortune teller predicted he would get a miracle.

This effect is known as gravitational lensing and is one of the predictions of Albert Einstein's general theory of relativity.

There is no humbuggery or New Age flavor that attaches to scientific predictions. They may be proven wrong, but they are always subject to the scientific method.

Not only can you use it, but 'prediction' is the usual word to describe an implication of a theory - in the future or in the past.

The word "predict" is routinely used in science, though with a slightly different meaning than the conventional use.

Normally when we say "predict" we mean that you believe that something will happen in the future. This could be based on some sort of reasoned analysis: "Noted economist Dr Jones predicts that the stock market will go up 500 points by August". Or it could be supernatural: "The prophet Daniel predicted the fall of Babylon by divine revelation."

The word can be used with this meaning in science. "The geologist predicted an earthquake."

But it can also be used to mean that something is an implication of a theory. "A prediction of Einstein's theory of relativity is that any attempt to measure the speed of light in a vacuum will always give the same result, regardless of the relative motions of the source and the observer."

I say that this is slightly different from the conventional definition because the scientist is not necessarily saying that he believes the prediction to be true. He is saying that IF the theory is true, THEN this result will happen. In real life scientists often formulate theories in which they have little or no confidence, just to have some standard to test against. For example, a scientist might say, I have no idea whether a magnetic field will affect this chemical reaction. So let's formulate the theory that the rate at which the precipitate forms is unaffected by magnetic fields. We could test this theory by, etc.

Scientists routinely talk about the "predictions of a theory", that is, what experimental results we should expect to see if a theory is true. This is commonly used as a test of the validity of a theory: if when we perform the experiments, the predictions turn out to be false, then the theory must be modified or discarded. If the predictions turn out to be true, then the theory may be true. (We can't say that the theory is proven true: it might be that it only appears to be true in some cases.) If we cannot make any testable predictions based on a theory, that theory is said to be weak. It is often said that such-and-such is not a legitimate scientific theory because it does not lead to any testable predictions.

I say this is different from the conventional definition. Someone might reply that people often doubt predictions, that, for example, many people doubt the predictions of psychics. But the difference is that in conventional use, the person making the prediction believes that it is true. (Or at least, claims to believe that it is true. Even the worst charlatan psychic doesn't say, "I don't really believe that this prediction is true. It's just something I made up for fun.")

How context will move IoT from smart devices to smart community

'When a common language is developed for devices and applications to talk to each other without human intervention, the opportunities of the IoT will be multiplied'.

With the recent explosion of news and updates concerning the Internet of Things (IoT) and its yet-to-be-fully-determined potential to drastically change the way the world operates, people could be led to believe this technology has barely arrived on the technology landscape.

However, sensors and monitors have been available for decades and have been widely deployed to achieve more efficient systems in industrial supply chains. It is only now, given the scale and connectedness of devices, that it's actually being referred to as the "Internet of Things".

So with this shift comes a need for streamlined standards and the ability for machines to derive context from a continuous stream of data. As many stakeholders now architect technology platforms to embrace the IoT, they stand at a tipping point.

Enormous data

According to analyst firm IDC, worldwide spending on the IoT will reach nearly $1.3 trillion in 2019, up from $698.6 billion in 2015. This growth is happening because the thresholds have been lowered as devices become cheaper, more practical and more powerful.

Things have become smarter in the age of artificial intelligence, algorithmically-driven software power and big data. With all these factors combined, an enormous amount of data is being generated, yet businesses are being held back by their inability to generate useful insight from that data.

But the IoT is proving its power in some areas. Among the earliest areas to flourish were sensors for determining weather patterns and traffic sensors to monitor speed and traffic density and help people plan journeys.

Additionally, the agriculture and food production industries make much use of radio-frequency identification (RFID) tags for livestock. Although many of these initiatives are limited in geographic scope, they have seen a fair level of standardisation – and this has been a key facilitator of progress.

Communities of Things

The next step is to band these isolated schemes into communities of things. However, for that to become a reality, more work needs to be done in terms of how communities and the devices within them interact and how information is shared.

As new, broader, more standardised platforms emerge, organisations will quickly get to a point where actionable insights can be derived from the information streams the IoT feeds to them. This requires context to be derived from data and interactions.

An example of why context is so important can be seen with smart labelling for clothes. With smart labels, an item of clothing could pass data from its label to a washing machine so that the machine could select the correct setting according to what the item is made of. This underpins the need for devices and things to be able to pass information – or context – between each other.

Machine learning

In order for the IoT to become widespread, machines and devices must be capable of making decisions autonomously. This depends on their ability to derive context through machine learning. When this is achieved, it will allow humans to be taken out of the equation, driving value and cost efficiencies.

For instance, in agriculture, sensors can already derive context from RFID tags fitted to animals to alert owners when an animal has strayed from pasture, cutting down on the time spent monitoring herds.

If this can be further matured in the manufacturing, energy, and oil and gas industries, sensors and other devices that collect data will be able to correlate information to better manage inventories and allow for the preventive maintenance of expensive equipment without onsite workers.

Further, cities would become smarter, replacing the current isolated systems that cannot efficiently talk to each other and block services from being managed in an integrated manner.

All of this relies on context – allowing machines and devices to infer meaning from streams of data from disparate systems. When a common language is developed for devices and applications to talk to each other without human intervention, the opportunities of the IoT will be multiplied.

When this happens, society will be able to move beyond the concept of smart consumer devices to the smart community.

Sourced from Giri Fox, director of technical services, Rackspace